Nov 25 20:26:57 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 25 20:26:57 crc restorecon[4696]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 20:26:57 crc restorecon[4696]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 20:26:57 crc restorecon[4696]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 20:26:57 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc 
restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc 
restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 
20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc 
restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 20:26:58 crc 
restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58
crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 
20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 20:26:58 crc 
restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc 
restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc 
restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 
crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc 
restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc 
restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 20:26:58 crc restorecon[4696]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc 
restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 20:26:58 crc restorecon[4696]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 20:26:58 crc restorecon[4696]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 25 20:26:59 crc kubenswrapper[4983]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 20:26:59 crc kubenswrapper[4983]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 25 20:26:59 crc kubenswrapper[4983]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 20:26:59 crc kubenswrapper[4983]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 25 20:26:59 crc kubenswrapper[4983]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 25 20:26:59 crc kubenswrapper[4983]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.324597 4983 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.337827 4983 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.337898 4983 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.337909 4983 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.337918 4983 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.337927 4983 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.337940 4983 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.337951 4983 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.337960 4983 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.337970 4983 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.337979 4983 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.337989 4983 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.337998 4983 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338006 4983 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338014 4983 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338024 4983 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338070 4983 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338082 4983 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338093 4983 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338101 4983 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338109 4983 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338117 4983 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338128 4983 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338138 4983 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338147 4983 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338154 4983 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338162 4983 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338172 4983 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338179 4983 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338187 4983 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338195 4983 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338203 4983 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338211 4983 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338219 4983 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338227 4983 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338234 4983 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338243 4983 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338250 4983 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338260 4983 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338270 4983 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338279 4983 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338286 4983 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338295 4983 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338304 4983 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338312 4983 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338320 4983 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338328 4983 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338337 4983 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338346 4983 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338353 4983 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338361 4983 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338369 4983 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338376 4983 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338385 4983 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338392 4983 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338400 4983 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338407 4983 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338419 4983 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338429 4983 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338438 4983 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338447 4983 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338455 4983 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338463 4983 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338470 4983 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338479 4983 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338487 4983 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338494 4983 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338502 4983 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338510 4983 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338518 4983 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338527 4983 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.338537 4983 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.338792 4983 flags.go:64] FLAG: --address="0.0.0.0"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.338826 4983 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.338855 4983 flags.go:64] FLAG: --anonymous-auth="true"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.338876 4983 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.338893 4983 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.338905 4983 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.338922 4983 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.338937 4983 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.338950 4983 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.338962 4983 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.338974 4983 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.338993 4983 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339007 4983 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339020 4983 flags.go:64] FLAG: --cgroup-root=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339031 4983 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339043 4983 flags.go:64] FLAG: --client-ca-file=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339055 4983 flags.go:64] FLAG: --cloud-config=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339065 4983 flags.go:64] FLAG: --cloud-provider=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339074 4983 flags.go:64] FLAG: --cluster-dns="[]"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339086 4983 flags.go:64] FLAG: --cluster-domain=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339096 4983 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339105 4983 flags.go:64] FLAG: --config-dir=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339115 4983 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339125 4983 flags.go:64] FLAG: --container-log-max-files="5"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339137 4983 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339147 4983 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339157 4983 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339167 4983 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339177 4983 flags.go:64] FLAG: --contention-profiling="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339186 4983 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339195 4983 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339204 4983 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339213 4983 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339225 4983 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339235 4983 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339245 4983 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339254 4983 flags.go:64] FLAG: --enable-load-reader="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339263 4983 flags.go:64] FLAG: --enable-server="true"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339272 4983 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339288 4983 flags.go:64] FLAG: --event-burst="100"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339297 4983 flags.go:64] FLAG: --event-qps="50"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339307 4983 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339316 4983 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339325 4983 flags.go:64] FLAG: --eviction-hard=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339337 4983 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339346 4983 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339355 4983 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339365 4983 flags.go:64] FLAG: --eviction-soft=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339374 4983 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339384 4983 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339393 4983 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339402 4983 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339424 4983 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339434 4983 flags.go:64] FLAG: --fail-swap-on="true"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339443 4983 flags.go:64] FLAG: --feature-gates=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339455 4983 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339464 4983 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339474 4983 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339484 4983 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339493 4983 flags.go:64] FLAG: --healthz-port="10248"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339503 4983 flags.go:64] FLAG: --help="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339519 4983 flags.go:64] FLAG: --hostname-override=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339528 4983 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339538 4983 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339548 4983 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339594 4983 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339603 4983 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339613 4983 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339622 4983 flags.go:64] FLAG: --image-service-endpoint=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339631 4983 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339640 4983 flags.go:64] FLAG: --kube-api-burst="100"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339651 4983 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339662 4983 flags.go:64] FLAG: --kube-api-qps="50"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339671 4983 flags.go:64] FLAG: --kube-reserved=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339681 4983 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339691 4983 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339700 4983 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339710 4983 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339720 4983 flags.go:64] FLAG: --lock-file=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339729 4983 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339738 4983 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339749 4983 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339777 4983 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339794 4983 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339808 4983 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339817 4983 flags.go:64] FLAG: --logging-format="text"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339828 4983 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339842 4983 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339854 4983 flags.go:64] FLAG: --manifest-url=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339866 4983 flags.go:64] FLAG: --manifest-url-header=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339881 4983 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339894 4983 flags.go:64] FLAG: --max-open-files="1000000"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339906 4983 flags.go:64] FLAG: --max-pods="110"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339924 4983 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339934 4983 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339943 4983 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339953 4983 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339963 4983 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339972 4983 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.339982 4983 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340007 4983 flags.go:64] FLAG: --node-status-max-images="50"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340016 4983 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340026 4983 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340036 4983 flags.go:64] FLAG: --pod-cidr=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340044 4983 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340059 4983 flags.go:64] FLAG: --pod-manifest-path=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340068 4983 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340078 4983 flags.go:64] FLAG: --pods-per-core="0"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340087 4983 flags.go:64] FLAG: --port="10250"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340097 4983 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340106 4983 flags.go:64] FLAG: --provider-id=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340115 4983 flags.go:64] FLAG: --qos-reserved=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340124 4983 flags.go:64] FLAG: --read-only-port="10255"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340134 4983 flags.go:64] FLAG: --register-node="true"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340144 4983 flags.go:64] FLAG: --register-schedulable="true"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340153 4983 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340169 4983 flags.go:64] FLAG: --registry-burst="10"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340183 4983 flags.go:64] FLAG: --registry-qps="5"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340193 4983 flags.go:64] FLAG: --reserved-cpus=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340204 4983 flags.go:64] FLAG: --reserved-memory=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340217 4983 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340227 4983 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340238 4983 flags.go:64] FLAG: --rotate-certificates="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340247 4983 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340256 4983 flags.go:64] FLAG: --runonce="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340269 4983 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340279 4983 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340289 4983 flags.go:64] FLAG: --seccomp-default="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340298 4983 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340307 4983 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340317 4983 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340327 4983 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340338 4983 flags.go:64] FLAG: --storage-driver-password="root"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340347 4983 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340356 4983 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340365 4983 flags.go:64] FLAG: --storage-driver-user="root"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340374 4983 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340384 4983 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340394 4983 flags.go:64] FLAG: --system-cgroups=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340403 4983 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340418 4983 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340427 4983 flags.go:64] FLAG: --tls-cert-file=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340436 4983 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340447 4983 flags.go:64] FLAG: --tls-min-version=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340456 4983 flags.go:64] FLAG: --tls-private-key-file=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340465 4983 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340474 4983 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340484 4983 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340494 4983 flags.go:64] FLAG: --v="2"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340508 4983 flags.go:64] FLAG: --version="false"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340521 4983 flags.go:64] FLAG: --vmodule=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340534 4983 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.340544 4983 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340822 4983 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340836 4983 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340850 4983 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340861 4983 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340877 4983 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340889 4983 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340897 4983 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340905 4983 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340914 4983 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340923 4983 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340932 4983 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340940 4983 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340948 4983 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340956 4983 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340964 4983 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340975 4983 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340985 4983 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.340995 4983 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341003 4983 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341013 4983 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341023 4983 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341032 4983 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341041 4983 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341049 4983 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341058 4983 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341067 4983 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341076 4983 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341085 4983 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341093 4983 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341103 4983 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341112 4983 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341121 4983 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341132 4983 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341141 4983 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341149 4983 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341157 4983 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341169 4983 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341177 4983 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341186 4983 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341194 4983 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341203 4983 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341211 4983 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341219 4983 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341226 4983 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341234 4983 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341242 4983 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341250 4983 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341258 4983 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341266 4983 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341274 4983 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341282 4983 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341289 4983 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341297 4983 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341305 4983 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341313 4983 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341320 4983 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341328 4983 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341336 4983 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341344 4983 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341353 4983 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341361 4983 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341369 4983 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341377 4983 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341385 4983 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341393 4983 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341400 4983 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341408 4983 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341418 4983 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341432 4983 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341442 4983 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.341451 4983 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.341467 4983 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.355127 4983 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.355199 4983 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355329 4983 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355340 4983 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355346 4983 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355352 4983 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355359 4983 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355364 4983 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355369 4983 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355377 4983 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355387 4983 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355395 4983 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355404 4983 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355411 4983 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355417 4983 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355423 4983 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355429 4983 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355435 4983 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355441 4983 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355446 4983 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355452 4983 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355457 4983 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355463 4983 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355470 4983 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355475 4983 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355481 4983 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355487 4983 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355492 4983 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355497 4983 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355502 4983 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355507 4983 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355511 4983 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355516 4983 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355522 4983 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355527 4983 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355532 4983 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355539 4983 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 20:26:59 crc 
kubenswrapper[4983]: W1125 20:26:59.355544 4983 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355550 4983 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355579 4983 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355584 4983 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355589 4983 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355594 4983 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355599 4983 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355605 4983 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355614 4983 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355620 4983 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355625 4983 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355630 4983 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355636 4983 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355641 4983 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355646 4983 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355651 4983 feature_gate.go:330] unrecognized feature gate: Example Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355656 4983 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355661 4983 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355666 4983 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355673 4983 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355680 4983 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355685 4983 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355691 4983 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355697 4983 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355702 4983 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355707 4983 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355712 4983 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355718 4983 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355723 4983 feature_gate.go:330] unrecognized feature 
gate: GCPClusterHostedDNS Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355728 4983 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355733 4983 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355739 4983 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355744 4983 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355749 4983 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355754 4983 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355760 4983 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.355770 4983 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355947 4983 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355955 4983 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355961 4983 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 25 20:26:59 
crc kubenswrapper[4983]: W1125 20:26:59.355967 4983 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355973 4983 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355980 4983 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355987 4983 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355992 4983 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.355998 4983 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356003 4983 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356008 4983 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356013 4983 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356018 4983 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356023 4983 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356028 4983 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356033 4983 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356039 4983 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 
20:26:59.356044 4983 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356050 4983 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356056 4983 feature_gate.go:330] unrecognized feature gate: Example Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356061 4983 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356069 4983 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356075 4983 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356081 4983 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356087 4983 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356092 4983 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356097 4983 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356102 4983 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356108 4983 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356114 4983 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356120 4983 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356125 4983 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356131 4983 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356137 4983 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356144 4983 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356151 4983 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356158 4983 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356163 4983 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356169 4983 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356174 4983 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356179 4983 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356185 4983 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356191 4983 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 20:26:59 crc kubenswrapper[4983]: 
W1125 20:26:59.356196 4983 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356201 4983 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356206 4983 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356212 4983 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356217 4983 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356222 4983 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356227 4983 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356232 4983 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356237 4983 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356242 4983 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356248 4983 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356253 4983 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356258 4983 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356263 4983 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356270 4983 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356276 4983 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356281 4983 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356287 4983 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356292 4983 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356297 4983 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356302 4983 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356309 4983 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356315 4983 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356321 4983 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356331 4983 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356339 4983 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356347 4983 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.356355 4983 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.356366 4983 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.356678 4983 server.go:940] "Client rotation is on, will bootstrap in background" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.362300 4983 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.365575 4983 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.367203 4983 server.go:997] "Starting client certificate rotation" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.367241 4983 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.368493 4983 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-14 11:00:05.644401454 +0000 UTC Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.368718 4983 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.398879 4983 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.403598 4983 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.403529 4983 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.426894 4983 log.go:25] "Validated CRI v1 runtime API" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.470737 4983 log.go:25] "Validated CRI v1 image API" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.474985 4983 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.482039 4983 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-25-20-22-18-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.482103 4983 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.514791 4983 manager.go:217] Machine: {Timestamp:2025-11-25 20:26:59.510650184 +0000 UTC m=+0.623183676 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 
AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:624587ca-b3c3-41fb-b4fb-210ed293ff8f BootID:f7a9b540-24a4-4342-97be-ae514f2fa363 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8e:06:82 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:8e:06:82 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:86:67:ef Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b6:77:95 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:15:f0:7f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:eb:e0:08 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2a:27:6b:01:aa:36 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ea:ec:4f:b5:37:da Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 
Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data 
Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.515275 4983 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.515628 4983 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.517147 4983 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.517505 4983 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.517653 4983 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.518056 4983 topology_manager.go:138] "Creating topology manager with none policy" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.518076 4983 container_manager_linux.go:303] "Creating device plugin manager" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.518659 4983 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.518753 4983 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.519109 4983 state_mem.go:36] "Initialized new in-memory state store" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.519273 4983 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.525471 4983 kubelet.go:418] "Attempting to sync node with API server" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.525531 4983 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.525625 4983 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.525659 4983 kubelet.go:324] "Adding apiserver pod source" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.525691 4983 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.532210 4983 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.532764 4983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.532790 4983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 
38.102.83.173:6443: connect: connection refused Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.533001 4983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.533054 4983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.534147 4983 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.536680 4983 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.538850 4983 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.538882 4983 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.538891 4983 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.538901 4983 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.538915 4983 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.538924 4983 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.538933 4983 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.538949 4983 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.538962 4983 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.538972 4983 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.538986 4983 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.539014 4983 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.541324 4983 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.543657 4983 server.go:1280] "Started kubelet" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.544392 4983 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.544490 4983 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.546348 4983 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Nov 25 20:26:59 crc systemd[1]: Started Kubernetes Kubelet. Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.551069 4983 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.552062 4983 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.552121 4983 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.552178 4983 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.552196 4983 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.552217 4983 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:46:27.59790376 +0000 UTC Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.552288 4983 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 849h19m28.045619348s for next certificate rotation Nov 25 20:26:59 crc 
kubenswrapper[4983]: E1125 20:26:59.552370 4983 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.552651 4983 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.553233 4983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.553535 4983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.553839 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="200ms" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.555763 4983 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.555788 4983 factory.go:55] Registering systemd factory Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.555798 4983 factory.go:221] Registration of the systemd container factory successfully Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.556061 4983 event.go:368] "Unable 
to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b59cdd0472cb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 20:26:59.543174328 +0000 UTC m=+0.655707720,LastTimestamp:2025-11-25 20:26:59.543174328 +0000 UTC m=+0.655707720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560388 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560435 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560448 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560464 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560476 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560487 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560499 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560510 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560523 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560534 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560538 4983 server.go:460] "Adding debug handlers to kubelet server" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560544 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560574 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560587 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560600 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560611 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.560981 4983 factory.go:153] Registering CRI-O factory Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 
20:26:59.561011 4983 factory.go:221] Registration of the crio container factory successfully Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.561050 4983 factory.go:103] Registering Raw factory Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.561068 4983 manager.go:1196] Started watching for new ooms in manager Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.561717 4983 manager.go:319] Starting recovery of all containers Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562508 4983 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562610 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562626 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562642 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562655 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562667 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562683 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562694 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562708 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562724 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562739 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562755 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562772 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562787 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562800 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562820 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562834 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562849 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562865 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562908 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562921 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562935 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562949 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562974 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.562990 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563007 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563021 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563034 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563047 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563059 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563072 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563085 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563098 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563113 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563132 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563144 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563156 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563168 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563184 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563197 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563209 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" 
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563222 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563236 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563257 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563274 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563286 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563298 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563311 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563324 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563336 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563353 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563369 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563381 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563393 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563404 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563416 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563427 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563454 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563475 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563489 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563672 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563694 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563770 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563789 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563801 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563826 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563838 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563868 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563883 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563895 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563908 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563920 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563931 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563961 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.563973 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564001 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564018 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564030 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564042 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564079 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564092 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564129 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564161 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564200 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564212 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564224 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564235 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564247 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564264 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564327 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564412 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564448 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564465 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564483 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564504 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564518 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564531 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564580 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564593 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564653 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564699 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564719 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564738 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564793 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564902 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.564988 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565030 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565041 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565052 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565064 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565075 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565086 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565097 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565129 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565158 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565169 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565187 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565242 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565262 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565315 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565328 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565361 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565391 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565500 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565515 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565526 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565542 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565575 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565593 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565627 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565639 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565669 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565719 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565739 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565813 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565826 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565845 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.565944 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.566007 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.566029 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.566044 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.566055 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.566066 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.566077 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.566090 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567388 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567433 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567470 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567492 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567525 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567549 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567597 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567628 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567648 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567678 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567700 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567722 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567750 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567771 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567803 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567827 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567848 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567877 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567898 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567926 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567948 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.567971 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568001 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568023 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568046 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568073 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568094 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568121 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568144 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568165 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568191 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568212 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568242 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568264 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568287 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568316 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568339 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568382 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568405 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568428 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568454 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568475 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568502 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568523 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568547 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568604 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568626 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568656 4983 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568676 4983 reconstruct.go:97] "Volume reconstruction finished" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.568689 4983 reconciler.go:26] "Reconciler: start to sync state" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.585673 4983 manager.go:324] Recovery completed Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.597796 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.599721 4983 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.599854 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.599883 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.601172 4983 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.601217 4983 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.601269 4983 state_mem.go:36] "Initialized new in-memory state store" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.601632 4983 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.603546 4983 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.603704 4983 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.603739 4983 kubelet.go:2335] "Starting kubelet main sync loop" Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.603918 4983 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 25 20:26:59 crc kubenswrapper[4983]: W1125 20:26:59.605287 4983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.605337 4983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.615286 4983 policy_none.go:49] "None policy: Start" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.616202 4983 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.616241 4983 state_mem.go:35] "Initializing new in-memory state store" Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.652431 4983 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.665667 4983 manager.go:334] "Starting Device Plugin manager" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.665821 4983 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.665841 4983 server.go:79] "Starting device plugin registration server" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.666311 4983 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.666378 4983 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.666682 4983 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.666824 4983 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.666840 4983 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.675926 4983 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.704373 4983 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.704458 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.706625 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.706660 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.706673 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.706837 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.707114 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.707221 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.707464 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.707493 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.707501 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.707621 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.707790 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.707835 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.708249 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.708267 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.708275 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.708347 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.708475 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.708498 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.708892 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.708910 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.708918 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.708999 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.709038 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.709056 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.709133 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.709156 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.709167 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.709533 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc 
kubenswrapper[4983]: I1125 20:26:59.709550 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.709578 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.709670 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.709807 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.709852 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.710234 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.710269 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.710281 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.710418 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.710460 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.711256 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.711277 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.711299 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.711318 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.711324 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.711327 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.754866 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="400ms" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.766833 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.768160 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 
20:26:59.768225 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.768237 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.768269 4983 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.769129 4983 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.771462 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.771594 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.771732 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.771814 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.771883 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.771957 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.772029 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.772099 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.772174 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.772271 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.772363 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.772477 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.772605 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.772722 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.772810 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874320 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874389 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874414 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874455 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874479 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874501 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874519 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874540 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874580 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874607 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874621 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874657 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874678 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874720 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874753 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874771 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874782 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874805 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874782 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874848 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874892 4983 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874905 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874917 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874915 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874953 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.875012 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.875014 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.874821 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.875100 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.875115 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.970344 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.972369 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.972410 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.972419 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:26:59 crc kubenswrapper[4983]: I1125 20:26:59.972444 4983 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 20:26:59 crc kubenswrapper[4983]: E1125 20:26:59.972942 4983 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.035451 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.051247 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.078072 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:27:00 crc kubenswrapper[4983]: W1125 20:27:00.084391 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-58d22ccbb7aa19ba4994cf8abeaffd513b879dd24fccad0336a9a95ccd139622 WatchSource:0}: Error finding container 58d22ccbb7aa19ba4994cf8abeaffd513b879dd24fccad0336a9a95ccd139622: Status 404 returned error can't find the container with id 58d22ccbb7aa19ba4994cf8abeaffd513b879dd24fccad0336a9a95ccd139622 Nov 25 20:27:00 crc kubenswrapper[4983]: W1125 20:27:00.088730 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-63e0342e03ec4c99b4f40789820942668690c9782e1e99addcd8584117471130 WatchSource:0}: Error finding container 63e0342e03ec4c99b4f40789820942668690c9782e1e99addcd8584117471130: Status 404 returned error can't find the container with id 63e0342e03ec4c99b4f40789820942668690c9782e1e99addcd8584117471130 Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.091688 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.099197 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 20:27:00 crc kubenswrapper[4983]: W1125 20:27:00.099702 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-bcb012689f0b6ef1e16ca18c87abd239866b358ddc07672bb690aade899c646a WatchSource:0}: Error finding container bcb012689f0b6ef1e16ca18c87abd239866b358ddc07672bb690aade899c646a: Status 404 returned error can't find the container with id bcb012689f0b6ef1e16ca18c87abd239866b358ddc07672bb690aade899c646a Nov 25 20:27:00 crc kubenswrapper[4983]: W1125 20:27:00.108412 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c7e8c2259ee8c9bf269855293e03746e9ef204739f697851a8e0b53c098c07a8 WatchSource:0}: Error finding container c7e8c2259ee8c9bf269855293e03746e9ef204739f697851a8e0b53c098c07a8: Status 404 returned error can't find the container with id c7e8c2259ee8c9bf269855293e03746e9ef204739f697851a8e0b53c098c07a8 Nov 25 20:27:00 crc kubenswrapper[4983]: W1125 20:27:00.116687 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-d705a46266dded07cd4ba2c6e6fd59f3545d2dc0ca2310125b430873a85116d3 WatchSource:0}: Error finding container d705a46266dded07cd4ba2c6e6fd59f3545d2dc0ca2310125b430873a85116d3: Status 404 returned error can't find the container with id d705a46266dded07cd4ba2c6e6fd59f3545d2dc0ca2310125b430873a85116d3 Nov 25 20:27:00 crc kubenswrapper[4983]: E1125 20:27:00.156361 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection 
refused" interval="800ms" Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.373735 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.375510 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.375581 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.375594 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.375621 4983 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 20:27:00 crc kubenswrapper[4983]: E1125 20:27:00.376183 4983 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.547240 4983 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.609888 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d705a46266dded07cd4ba2c6e6fd59f3545d2dc0ca2310125b430873a85116d3"} Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.611447 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c7e8c2259ee8c9bf269855293e03746e9ef204739f697851a8e0b53c098c07a8"} Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.612512 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bcb012689f0b6ef1e16ca18c87abd239866b358ddc07672bb690aade899c646a"} Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.613684 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"58d22ccbb7aa19ba4994cf8abeaffd513b879dd24fccad0336a9a95ccd139622"} Nov 25 20:27:00 crc kubenswrapper[4983]: I1125 20:27:00.614579 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"63e0342e03ec4c99b4f40789820942668690c9782e1e99addcd8584117471130"} Nov 25 20:27:00 crc kubenswrapper[4983]: W1125 20:27:00.760262 4983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Nov 25 20:27:00 crc kubenswrapper[4983]: E1125 20:27:00.760360 4983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Nov 25 20:27:00 crc kubenswrapper[4983]: W1125 20:27:00.917793 4983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Nov 25 20:27:00 crc kubenswrapper[4983]: E1125 20:27:00.917943 4983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Nov 25 20:27:00 crc kubenswrapper[4983]: E1125 20:27:00.957693 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="1.6s" Nov 25 20:27:00 crc kubenswrapper[4983]: W1125 20:27:00.995112 4983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Nov 25 20:27:00 crc kubenswrapper[4983]: E1125 20:27:00.995275 4983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Nov 25 20:27:01 crc kubenswrapper[4983]: W1125 20:27:01.025517 4983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Nov 25 20:27:01 crc 
kubenswrapper[4983]: E1125 20:27:01.025693 4983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.177290 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.180030 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.180086 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.180101 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.180129 4983 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 20:27:01 crc kubenswrapper[4983]: E1125 20:27:01.180791 4983 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.519782 4983 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 25 20:27:01 crc kubenswrapper[4983]: E1125 20:27:01.521703 4983 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.548043 4983 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.621397 4983 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482" exitCode=0 Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.621575 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482"} Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.621644 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.623170 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.623253 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.623284 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.626192 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae"} Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.626294 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c"} Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.626318 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7"} Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.626341 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43"} Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.626245 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.627774 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.627810 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.627823 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.629468 4983 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad" exitCode=0 Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.629618 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad"} Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.629642 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.633762 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.633825 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.633855 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.634910 4983 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6" exitCode=0 Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.635006 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6"} Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.635131 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.636419 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:01 crc 
kubenswrapper[4983]: I1125 20:27:01.636454 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.636467 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.637712 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.638643 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.638686 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.638709 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.639161 4983 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="0360b922e7c3b0ebb6d85c3ae9691d12454c01e12a6ed60e5f2082614d9cb522" exitCode=0 Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.639202 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"0360b922e7c3b0ebb6d85c3ae9691d12454c01e12a6ed60e5f2082614d9cb522"} Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.639359 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.640434 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.640480 4983 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:01 crc kubenswrapper[4983]: I1125 20:27:01.640495 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.548333 4983 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Nov 25 20:27:02 crc kubenswrapper[4983]: E1125 20:27:02.559024 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="3.2s" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.644089 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bf6954a95f865f582d6f8a5d3303a7491c93f45e89134e02385b4ddebb5ed175"} Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.644238 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.645765 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.645797 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.645806 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.649968 4983 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c938af7bc233b84d7911804cfc58c11b7bac9fea1cd554210f5ea336512fff54"} Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.650032 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"123617c4db80100b747b7aad700831dc64b324c68688b53a2103be194c9a9933"} Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.650044 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7d228861842ad79f241e1bb31222ffcc9e1a9f698e036a73a87d6d7d97f51f9b"} Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.650081 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.651833 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.651901 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.651923 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.655947 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b"} Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.655999 4983 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c"} Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.656015 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776"} Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.660895 4983 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e" exitCode=0 Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.661016 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e"} Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.661065 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.661211 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.664406 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.664600 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.664634 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.665203 
4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.665248 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.665267 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.781410 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.782970 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.783015 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.783029 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:02 crc kubenswrapper[4983]: I1125 20:27:02.783062 4983 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 20:27:02 crc kubenswrapper[4983]: E1125 20:27:02.783670 4983 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.173:6443: connect: connection refused" node="crc" Nov 25 20:27:02 crc kubenswrapper[4983]: W1125 20:27:02.895099 4983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.173:6443: connect: connection refused Nov 25 20:27:02 crc kubenswrapper[4983]: E1125 20:27:02.895213 4983 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.173:6443: connect: connection refused" logger="UnhandledError" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.003178 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.421888 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.669106 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78"} Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.669168 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9"} Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.669285 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.670861 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.670922 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.670945 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.673007 4983 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd" exitCode=0 Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.673173 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.673177 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd"} Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.673252 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.673219 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.673291 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.673399 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.675046 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.675111 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.675138 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 
20:27:03.675541 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.675609 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.675611 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.675631 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.675651 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.675667 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.675695 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.675707 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:03 crc kubenswrapper[4983]: I1125 20:27:03.675675 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.176538 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.578520 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.680748 4983 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.680757 4983 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.680816 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.680788 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.680734 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967"} Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.680946 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564"} Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.680982 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343"} Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.681032 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c"} Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.682017 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.682038 4983 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.682050 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.682059 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.682066 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.682052 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.682097 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.682110 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:04 crc kubenswrapper[4983]: I1125 20:27:04.682070 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.119651 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.690779 4983 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.692738 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.692760 4983 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.692826 4983 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.693139 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a"} Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.693591 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.694258 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.694283 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.694293 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.694829 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.694865 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.694886 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.694894 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.694905 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.694913 
4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.984155 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.985898 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.985938 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.985951 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:05 crc kubenswrapper[4983]: I1125 20:27:05.985975 4983 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 20:27:06 crc kubenswrapper[4983]: I1125 20:27:06.695763 4983 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 20:27:06 crc kubenswrapper[4983]: I1125 20:27:06.695858 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:06 crc kubenswrapper[4983]: I1125 20:27:06.695924 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:06 crc kubenswrapper[4983]: I1125 20:27:06.697857 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:06 crc kubenswrapper[4983]: I1125 20:27:06.697927 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:06 crc kubenswrapper[4983]: I1125 20:27:06.697954 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:06 crc kubenswrapper[4983]: I1125 20:27:06.698030 4983 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:06 crc kubenswrapper[4983]: I1125 20:27:06.698068 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:06 crc kubenswrapper[4983]: I1125 20:27:06.698096 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:07 crc kubenswrapper[4983]: I1125 20:27:07.578946 4983 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 20:27:07 crc kubenswrapper[4983]: I1125 20:27:07.579103 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 20:27:08 crc kubenswrapper[4983]: I1125 20:27:08.341671 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 25 20:27:08 crc kubenswrapper[4983]: I1125 20:27:08.341871 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:08 crc kubenswrapper[4983]: I1125 20:27:08.343030 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:08 crc kubenswrapper[4983]: I1125 20:27:08.343060 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:08 crc kubenswrapper[4983]: I1125 20:27:08.343071 4983 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:09 crc kubenswrapper[4983]: I1125 20:27:09.485802 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:27:09 crc kubenswrapper[4983]: I1125 20:27:09.486001 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:09 crc kubenswrapper[4983]: I1125 20:27:09.487202 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:09 crc kubenswrapper[4983]: I1125 20:27:09.487232 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:09 crc kubenswrapper[4983]: I1125 20:27:09.487245 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:09 crc kubenswrapper[4983]: E1125 20:27:09.676087 4983 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 20:27:09 crc kubenswrapper[4983]: I1125 20:27:09.952095 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 25 20:27:09 crc kubenswrapper[4983]: I1125 20:27:09.952364 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:09 crc kubenswrapper[4983]: I1125 20:27:09.954058 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:09 crc kubenswrapper[4983]: I1125 20:27:09.954134 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:09 crc kubenswrapper[4983]: I1125 20:27:09.954158 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 25 20:27:10 crc kubenswrapper[4983]: I1125 20:27:10.131195 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:27:10 crc kubenswrapper[4983]: I1125 20:27:10.131514 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:10 crc kubenswrapper[4983]: I1125 20:27:10.133352 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:10 crc kubenswrapper[4983]: I1125 20:27:10.133415 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:10 crc kubenswrapper[4983]: I1125 20:27:10.133436 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:10 crc kubenswrapper[4983]: I1125 20:27:10.139297 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:27:10 crc kubenswrapper[4983]: I1125 20:27:10.707185 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:10 crc kubenswrapper[4983]: I1125 20:27:10.708218 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:10 crc kubenswrapper[4983]: I1125 20:27:10.708255 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:10 crc kubenswrapper[4983]: I1125 20:27:10.708268 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:10 crc kubenswrapper[4983]: I1125 20:27:10.715039 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:27:11 crc kubenswrapper[4983]: I1125 20:27:11.711259 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:11 crc kubenswrapper[4983]: I1125 20:27:11.712818 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:11 crc kubenswrapper[4983]: I1125 20:27:11.712908 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:11 crc kubenswrapper[4983]: I1125 20:27:11.712938 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:13 crc kubenswrapper[4983]: W1125 20:27:13.195764 4983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 25 20:27:13 crc kubenswrapper[4983]: I1125 20:27:13.195941 4983 trace.go:236] Trace[1769071445]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 20:27:03.193) (total time: 10002ms): Nov 25 20:27:13 crc kubenswrapper[4983]: Trace[1769071445]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (20:27:13.195) Nov 25 20:27:13 crc kubenswrapper[4983]: Trace[1769071445]: [10.002200322s] [10.002200322s] END Nov 25 20:27:13 crc kubenswrapper[4983]: E1125 20:27:13.195986 4983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 25 
20:27:13 crc kubenswrapper[4983]: W1125 20:27:13.468736 4983 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 25 20:27:13 crc kubenswrapper[4983]: I1125 20:27:13.468868 4983 trace.go:236] Trace[22024462]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 20:27:03.467) (total time: 10001ms): Nov 25 20:27:13 crc kubenswrapper[4983]: Trace[22024462]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:27:13.468) Nov 25 20:27:13 crc kubenswrapper[4983]: Trace[22024462]: [10.001478793s] [10.001478793s] END Nov 25 20:27:13 crc kubenswrapper[4983]: E1125 20:27:13.468894 4983 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 25 20:27:13 crc kubenswrapper[4983]: I1125 20:27:13.548200 4983 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 25 20:27:14 crc kubenswrapper[4983]: I1125 20:27:14.050735 4983 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 25 20:27:14 crc 
kubenswrapper[4983]: I1125 20:27:14.050798 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 25 20:27:14 crc kubenswrapper[4983]: I1125 20:27:14.057274 4983 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 25 20:27:14 crc kubenswrapper[4983]: I1125 20:27:14.057376 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 25 20:27:14 crc kubenswrapper[4983]: I1125 20:27:14.181853 4983 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]log ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]etcd ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/start-apiserver-admission-initializer ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/openshift.io-api-request-count-filter ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/openshift.io-startkubeinformers ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/generic-apiserver-start-informers ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/priority-and-fairness-config-consumer ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/priority-and-fairness-filter ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/storage-object-count-tracker-hook ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/start-apiextensions-informers ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld
Nov 25 20:27:14 crc kubenswrapper[4983]: [-]poststarthook/crd-informer-synced failed: reason withheld
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/start-system-namespaces-controller ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/start-cluster-authentication-info-controller ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/start-legacy-token-tracking-controller ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/start-service-ip-repair-controllers ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Nov 25 20:27:14 crc kubenswrapper[4983]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/priority-and-fairness-config-producer ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/bootstrap-controller ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/start-kube-aggregator-informers ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/apiservice-status-local-available-controller ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/apiservice-status-remote-available-controller ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/apiservice-registration-controller ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/apiservice-wait-for-first-sync ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/apiservice-discovery-controller ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/kube-apiserver-autoregistration ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]autoregister-completion ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/apiservice-openapi-controller ok
Nov 25 20:27:14 crc kubenswrapper[4983]: [+]poststarthook/apiservice-openapiv3-controller ok
Nov 25 20:27:14 crc kubenswrapper[4983]: livez check failed
Nov 25 20:27:14 crc kubenswrapper[4983]: I1125 20:27:14.181947 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 20:27:16 crc kubenswrapper[4983]: I1125 20:27:16.692220 4983 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Nov 25 20:27:17 crc kubenswrapper[4983]: I1125 20:27:17.579611 4983 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 25 20:27:17 crc kubenswrapper[4983]: I1125 20:27:17.579671 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.053948 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.056367 4983 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.059697 4983 trace.go:236] Trace[1205021268]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 20:27:04.146) (total time: 14913ms):
Nov 25 20:27:19 crc kubenswrapper[4983]: Trace[1205021268]: ---"Objects listed" error: 14912ms (20:27:19.059)
Nov 25 20:27:19 crc kubenswrapper[4983]: Trace[1205021268]: [14.913099676s] [14.913099676s] END
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.059751 4983 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.061105 4983 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.061154 4983 trace.go:236] Trace[730652956]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 20:27:07.553) (total time: 11507ms):
Nov 25 20:27:19 crc kubenswrapper[4983]: Trace[730652956]: ---"Objects listed" error: 11507ms (20:27:19.060)
Nov 25 20:27:19 crc kubenswrapper[4983]: Trace[730652956]: [11.507678201s] [11.507678201s] END
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.061176 4983 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.062693 4983 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.186864 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.187879 4983 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.187989 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.191733 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.398843 4983 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.486420 4983 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.486477 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.534637 4983 apiserver.go:52] "Watching apiserver"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.540115 4983 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.540390 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.540854 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.540934 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.541007 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.541066 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.541091 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.541132 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.541155 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.541196 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.541470 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.542995 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.543673 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.545370 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.545433 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.545893 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.546042 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.546077 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.546455 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.549728 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.553522 4983 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563356 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563401 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563452 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563477 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563506 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563530 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563575 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563598 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563620 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563642 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563664 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563733 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563911 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.563989 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564170 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564188 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564384 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564591 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564417 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564623 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564312 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564682 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564711 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564745 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564772 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564807 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564836 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564867 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564899 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564925 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564953 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.564978 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565007 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565037 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565061 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565087 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565113 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565141 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565169 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565190 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565215 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565243 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565273 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565301 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565326 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565385 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565414 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565439 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565467 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565189 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565380 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565592 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565628 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565657 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565684 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565740 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565771 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565799 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565800 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565835 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565851 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565862 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565915 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565939 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565959 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565979 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.565999 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566029 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566036 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566071 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566091 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566124 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566152 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566182 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566209 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566239 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566272 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566298 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566326 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566352 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566379 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 
20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566410 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566437 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566463 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566491 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566517 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566544 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566754 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566781 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566841 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566882 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566911 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566939 4983 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566963 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.566989 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567018 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567024 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567044 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567074 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567101 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567131 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567156 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567184 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" 
(UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567230 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567239 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567256 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567283 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567307 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567334 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567361 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567386 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567412 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567441 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567465 4983 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567496 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567524 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567580 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567605 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567634 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567661 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567687 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567715 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567740 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567764 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567793 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567820 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567848 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567875 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567903 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567929 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567954 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.567980 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568004 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568028 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568052 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568109 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568138 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568167 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568195 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568223 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568251 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") 
" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568281 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568308 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568336 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568364 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568394 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568424 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568453 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568481 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568522 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568575 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568604 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " 
Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568631 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568658 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568685 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568712 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568737 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568764 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568792 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568818 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568845 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568874 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568903 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568932 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568959 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.568985 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569013 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569040 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 
20:27:19.569067 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569094 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569126 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569155 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569181 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569205 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569231 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569257 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569281 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569306 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569331 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569359 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569385 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569413 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569439 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569468 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569495 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569519 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569547 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569638 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569669 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.569697 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.570028 4983 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.572049 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.571834 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.572097 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.572295 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.572289 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.576333 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.572469 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.572638 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.572657 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.572715 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.572939 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.572993 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.573056 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.573228 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.573514 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.573840 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.573861 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.574266 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.574289 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.574625 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.574646 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.574895 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.575164 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.575156 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.576315 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.576504 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.576458 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.576627 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.576806 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:27:20.076782983 +0000 UTC m=+21.189316375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577033 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577116 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577166 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577203 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577234 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577256 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577283 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577309 4983 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577340 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577374 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577412 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577435 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577459 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577499 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577531 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577566 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577593 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577622 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 25 
20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577644 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577669 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577700 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577719 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577806 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577839 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" 
(UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577863 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577896 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577921 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577948 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 
20:27:19.577974 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578006 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578030 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578054 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578077 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578101 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578124 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578149 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578253 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578267 4983 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578283 4983 reconciler_common.go:293] "Volume detached 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578295 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578306 4983 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578316 4983 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578331 4983 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578342 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578354 4983 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578368 4983 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 
25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578379 4983 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578392 4983 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578407 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578424 4983 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578435 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578447 4983 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578458 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578472 4983 reconciler_common.go:293] 
"Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578483 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578494 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578504 4983 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578518 4983 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578531 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578623 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578640 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578653 4983 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578666 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.579241 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577115 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.577408 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578300 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578407 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578664 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.578698 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.579915 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.580105 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.580670 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.580485 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.580926 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.581750 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.582051 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.582755 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.583608 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.583644 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.583744 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.583803 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.584027 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.584114 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.584631 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.585332 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.585590 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.585623 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.585748 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.585871 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.585906 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.585919 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.585950 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.586117 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.586079 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.586392 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.586406 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.582965 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.586720 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.586730 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.586754 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.586895 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.587317 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.587321 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.587379 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.587502 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.587724 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.587733 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.587749 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.587849 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.588043 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.588348 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.582943 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.588379 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.588434 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.588709 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.588776 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.589010 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.589056 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.589246 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.589302 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.589340 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.589600 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.589671 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.589673 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.589872 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.590128 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.590354 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.590565 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.590812 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.591251 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.591446 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.591643 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.591716 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.591902 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.591914 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.592360 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.592514 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.592545 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.593155 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.593807 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.594133 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.594198 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.594414 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.617112 4983 csr.go:261] certificate signing request csr-t8ntz is approved, waiting to be issued Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.617270 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.617469 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.617772 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.617969 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.618058 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.619230 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.619355 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.619461 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.619607 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.624285 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.625063 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.625685 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.625682 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.625874 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.625952 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.626087 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.626100 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.630117 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.631134 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.631175 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.631500 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.631687 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.631705 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.632056 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.632424 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.633469 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.633657 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.633679 4983 csr.go:257] certificate signing request csr-t8ntz is issued Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.634663 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.634898 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.634921 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.635067 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.635212 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.635509 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.636074 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.636381 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.636701 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.636882 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.637362 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.637449 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.637941 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.638370 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.638419 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.638631 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.638770 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.638909 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.639133 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.639955 4983 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.640252 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.640994 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.641048 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.641770 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.641932 4983 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.642002 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:20.141981905 +0000 UTC m=+21.254515297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.642255 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.642741 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.642817 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.642861 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.642975 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.643185 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.643896 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.644079 4983 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.644795 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:20.14478229 +0000 UTC m=+21.257315682 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.644923 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.590886 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.645277 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.645773 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.592493 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.592506 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.645999 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.646291 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.646646 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.646695 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.645413 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.646587 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.648911 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.651798 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.652999 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.660370 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.663746 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.664346 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.664366 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.664377 4983 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.664461 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:20.164441276 +0000 UTC m=+21.276974668 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.664963 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.665051 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.665078 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.665090 4983 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.665167 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:20.165127234 +0000 UTC m=+21.277660616 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.665780 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.668262 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.668368 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.672007 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.672376 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.674964 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.675904 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679534 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679602 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679687 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679699 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679709 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679721 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 
25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679731 4983 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679742 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679752 4983 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679762 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679773 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679783 4983 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679793 4983 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679803 
4983 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679812 4983 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679823 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679832 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679841 4983 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679850 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679862 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679872 4983 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679883 4983 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679893 4983 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679902 4983 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679911 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679970 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679982 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.679992 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680003 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680013 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680023 4983 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680034 4983 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680045 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680054 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680094 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680104 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680114 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680124 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680136 4983 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680144 4983 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680153 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680162 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680172 4983 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680181 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680191 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680202 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680212 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680220 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680230 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680240 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680233 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680250 4983 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680315 4983 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680331 4983 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680343 4983 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680345 4983 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680399 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680354 4983 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680426 4983 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680437 4983 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680448 4983 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680458 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680470 4983 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680483 4983 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680493 4983 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680503 4983 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680512 4983 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680521 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680531 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 
20:27:19.680540 4983 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680564 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680575 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680584 4983 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680594 4983 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680605 4983 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680615 4983 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680624 4983 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680634 4983 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680646 4983 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680656 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680666 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680681 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680690 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680700 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc 
kubenswrapper[4983]: I1125 20:27:19.680711 4983 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680721 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680731 4983 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680740 4983 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680754 4983 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680765 4983 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680774 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680784 4983 reconciler_common.go:293] "Volume detached for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680793 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680802 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680847 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680857 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680866 4983 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680875 4983 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680884 4983 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680894 4983 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680903 4983 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680912 4983 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680922 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680931 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680941 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680951 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680960 4983 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680969 4983 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680979 4983 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680989 4983 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.680999 4983 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681008 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681017 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 25 
20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681029 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681040 4983 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681052 4983 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681062 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681073 4983 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681083 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681093 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681105 4983 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681139 4983 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681149 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681157 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681167 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681176 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681185 4983 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681194 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681205 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681215 4983 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681226 4983 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681236 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681245 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681255 4983 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681264 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc 
kubenswrapper[4983]: I1125 20:27:19.681576 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681588 4983 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681600 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681610 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681619 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681627 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681635 4983 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681646 4983 reconciler_common.go:293] "Volume detached for 
volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681655 4983 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681664 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681676 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681686 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681695 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681704 4983 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681726 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681735 4983 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681743 4983 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681752 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681761 4983 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681769 4983 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681782 4983 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681791 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node 
\"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681801 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681812 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681821 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681829 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681837 4983 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681846 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681855 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681864 4983 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681873 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681882 4983 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.681891 4983 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.691198 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c65
9670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.695061 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.696363 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.697614 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.698487 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.699998 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.701147 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.703168 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.703384 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.704335 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.705721 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.705940 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.707789 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.708791 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.709520 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.711103 4983 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.711273 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.711444 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.714394 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.715484 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.718277 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.720308 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.720800 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.721749 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.722456 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.723606 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.724306 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.725308 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.725953 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.727001 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.728208 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.728793 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.729760 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.730363 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.730783 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.732007 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.732582 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.733094 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.733855 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.734517 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.734767 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.736202 4983 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9" exitCode=255 Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.736434 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.737243 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.738607 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9"} Nov 25 20:27:19 crc kubenswrapper[4983]: E1125 20:27:19.757972 4983 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.758309 4983 scope.go:117] "RemoveContainer" containerID="63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.777328 4983 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.782996 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.783032 4983 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.783045 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.783056 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.806881 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.817240 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.828258 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.847309 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.853828 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.857753 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.866210 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.869518 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.879587 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.891946 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.907136 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.921496 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.942082 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126
bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.959526 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.971816 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.980729 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.993824 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:19 crc kubenswrapper[4983]: I1125 20:27:19.995404 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.003107 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.013419 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 
dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.025301 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.047487 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.057053 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.071518 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.084981 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.087461 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.087640 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:27:21.087622284 +0000 UTC m=+22.200155676 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.103200 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.124384 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.142855 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.142955 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-rltkm"] Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.143452 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-rltkm" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.146598 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.148844 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.148847 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.152852 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.180650 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 
dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.188899 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.188981 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40f035b7-d789-469f-976b-bc8b70a1a9b6-hosts-file\") pod \"node-resolver-rltkm\" (UID: \"40f035b7-d789-469f-976b-bc8b70a1a9b6\") " pod="openshift-dns/node-resolver-rltkm" Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.189207 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.189237 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.189275 4983 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.189374 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:21.189353923 +0000 UTC m=+22.301887395 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.189856 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjdbn\" (UniqueName: \"kubernetes.io/projected/40f035b7-d789-469f-976b-bc8b70a1a9b6-kube-api-access-qjdbn\") pod \"node-resolver-rltkm\" (UID: \"40f035b7-d789-469f-976b-bc8b70a1a9b6\") " pod="openshift-dns/node-resolver-rltkm" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.189937 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.189965 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.190014 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.190093 4983 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.190134 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.190153 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.190163 4983 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.190176 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:21.190160004 +0000 UTC m=+22.302693396 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.190218 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:21.190185815 +0000 UTC m=+22.302719277 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.190249 4983 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.190307 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:21.190295798 +0000 UTC m=+22.302829250 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.195883 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.207646 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.218158 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.231203 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.245161 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.256789 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.267575 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.278886 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.291648 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40f035b7-d789-469f-976b-bc8b70a1a9b6-hosts-file\") pod \"node-resolver-rltkm\" (UID: \"40f035b7-d789-469f-976b-bc8b70a1a9b6\") " pod="openshift-dns/node-resolver-rltkm" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.291709 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjdbn\" (UniqueName: \"kubernetes.io/projected/40f035b7-d789-469f-976b-bc8b70a1a9b6-kube-api-access-qjdbn\") pod \"node-resolver-rltkm\" (UID: 
\"40f035b7-d789-469f-976b-bc8b70a1a9b6\") " pod="openshift-dns/node-resolver-rltkm" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.291861 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40f035b7-d789-469f-976b-bc8b70a1a9b6-hosts-file\") pod \"node-resolver-rltkm\" (UID: \"40f035b7-d789-469f-976b-bc8b70a1a9b6\") " pod="openshift-dns/node-resolver-rltkm" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.295640 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\
\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c96
8ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.306993 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 
dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.309782 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjdbn\" (UniqueName: \"kubernetes.io/projected/40f035b7-d789-469f-976b-bc8b70a1a9b6-kube-api-access-qjdbn\") pod \"node-resolver-rltkm\" (UID: \"40f035b7-d789-469f-976b-bc8b70a1a9b6\") " pod="openshift-dns/node-resolver-rltkm" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.318625 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.328597 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.455373 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-rltkm" Nov 25 20:27:20 crc kubenswrapper[4983]: W1125 20:27:20.477769 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40f035b7_d789_469f_976b_bc8b70a1a9b6.slice/crio-caf6320b6d568e233cfa06d92a5eb2dd9faae96f3797f0f037aa991b05b72b07 WatchSource:0}: Error finding container caf6320b6d568e233cfa06d92a5eb2dd9faae96f3797f0f037aa991b05b72b07: Status 404 returned error can't find the container with id caf6320b6d568e233cfa06d92a5eb2dd9faae96f3797f0f037aa991b05b72b07 Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.539770 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fqvg7"] Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.540091 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-hn4fk"] Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.540403 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.543604 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.543841 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.550860 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.550977 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.552716 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-6fkbz"] Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.553050 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.561832 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.562074 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.562196 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.562314 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.562474 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.562602 4983 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.562704 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.562808 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.562962 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.576471 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594243 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-os-release\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 
20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594296 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/40e594b9-8aa2-400d-b72e-c36e4523ced3-multus-daemon-config\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594315 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/40e594b9-8aa2-400d-b72e-c36e4523ced3-cni-binary-copy\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594333 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/94cdd87a-a76e-46dd-ba54-2584620c32a2-cni-binary-copy\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594361 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-multus-socket-dir-parent\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594380 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-run-multus-certs\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " 
pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594397 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/94cdd87a-a76e-46dd-ba54-2584620c32a2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594430 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/373cf631-46b3-49f3-af97-be8271ce5150-proxy-tls\") pod \"machine-config-daemon-fqvg7\" (UID: \"373cf631-46b3-49f3-af97-be8271ce5150\") " pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594446 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-var-lib-cni-multus\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594462 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/94cdd87a-a76e-46dd-ba54-2584620c32a2-cnibin\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594479 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/94cdd87a-a76e-46dd-ba54-2584620c32a2-system-cni-dir\") pod 
\"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594495 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-run-k8s-cni-cncf-io\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594510 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/94cdd87a-a76e-46dd-ba54-2584620c32a2-os-release\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594528 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-multus-conf-dir\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594543 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl4vd\" (UniqueName: \"kubernetes.io/projected/94cdd87a-a76e-46dd-ba54-2584620c32a2-kube-api-access-dl4vd\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594578 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-var-lib-cni-bin\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594596 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-var-lib-kubelet\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594612 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmxwc\" (UniqueName: \"kubernetes.io/projected/40e594b9-8aa2-400d-b72e-c36e4523ced3-kube-api-access-rmxwc\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594628 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-multus-cni-dir\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594645 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-run-netns\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594664 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-hostroot\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594681 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8j5s\" (UniqueName: \"kubernetes.io/projected/373cf631-46b3-49f3-af97-be8271ce5150-kube-api-access-m8j5s\") pod \"machine-config-daemon-fqvg7\" (UID: \"373cf631-46b3-49f3-af97-be8271ce5150\") " pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594700 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-etc-kubernetes\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594727 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-system-cni-dir\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594743 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/373cf631-46b3-49f3-af97-be8271ce5150-rootfs\") pod \"machine-config-daemon-fqvg7\" (UID: \"373cf631-46b3-49f3-af97-be8271ce5150\") " pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594758 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/373cf631-46b3-49f3-af97-be8271ce5150-mcd-auth-proxy-config\") pod \"machine-config-daemon-fqvg7\" (UID: \"373cf631-46b3-49f3-af97-be8271ce5150\") " pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594775 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-cnibin\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.594790 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/94cdd87a-a76e-46dd-ba54-2584620c32a2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.618160 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.636274 4983 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-25 20:22:19 +0000 UTC, rotation deadline is 2026-08-21 01:01:20.840545989 +0000 UTC Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.636363 4983 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6436h34m0.204188751s for next certificate rotation Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.655879 4983 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.695996 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-os-release\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696051 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/40e594b9-8aa2-400d-b72e-c36e4523ced3-multus-daemon-config\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696074 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/40e594b9-8aa2-400d-b72e-c36e4523ced3-cni-binary-copy\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") 
" pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696093 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/94cdd87a-a76e-46dd-ba54-2584620c32a2-cni-binary-copy\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696119 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-multus-socket-dir-parent\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696136 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-run-multus-certs\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696154 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/94cdd87a-a76e-46dd-ba54-2584620c32a2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696172 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/94cdd87a-a76e-46dd-ba54-2584620c32a2-cnibin\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " 
pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696207 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/373cf631-46b3-49f3-af97-be8271ce5150-proxy-tls\") pod \"machine-config-daemon-fqvg7\" (UID: \"373cf631-46b3-49f3-af97-be8271ce5150\") " pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696225 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-var-lib-cni-multus\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696243 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/94cdd87a-a76e-46dd-ba54-2584620c32a2-system-cni-dir\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696262 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-run-k8s-cni-cncf-io\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696287 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/94cdd87a-a76e-46dd-ba54-2584620c32a2-os-release\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " 
pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696306 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-multus-conf-dir\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696325 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl4vd\" (UniqueName: \"kubernetes.io/projected/94cdd87a-a76e-46dd-ba54-2584620c32a2-kube-api-access-dl4vd\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696347 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-var-lib-cni-bin\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696365 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-var-lib-kubelet\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696387 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmxwc\" (UniqueName: \"kubernetes.io/projected/40e594b9-8aa2-400d-b72e-c36e4523ced3-kube-api-access-rmxwc\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc 
kubenswrapper[4983]: I1125 20:27:20.696410 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8j5s\" (UniqueName: \"kubernetes.io/projected/373cf631-46b3-49f3-af97-be8271ce5150-kube-api-access-m8j5s\") pod \"machine-config-daemon-fqvg7\" (UID: \"373cf631-46b3-49f3-af97-be8271ce5150\") " pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696432 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-multus-cni-dir\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696451 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-run-netns\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696471 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-hostroot\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696491 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-system-cni-dir\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696518 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-etc-kubernetes\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696544 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/373cf631-46b3-49f3-af97-be8271ce5150-rootfs\") pod \"machine-config-daemon-fqvg7\" (UID: \"373cf631-46b3-49f3-af97-be8271ce5150\") " pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696583 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/373cf631-46b3-49f3-af97-be8271ce5150-mcd-auth-proxy-config\") pod \"machine-config-daemon-fqvg7\" (UID: \"373cf631-46b3-49f3-af97-be8271ce5150\") " pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696602 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-cnibin\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696619 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/94cdd87a-a76e-46dd-ba54-2584620c32a2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.696987 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-multus-conf-dir\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697080 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/94cdd87a-a76e-46dd-ba54-2584620c32a2-system-cni-dir\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697095 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-var-lib-kubelet\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697105 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697114 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-run-k8s-cni-cncf-io\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc 
kubenswrapper[4983]: I1125 20:27:20.697157 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-var-lib-cni-multus\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697252 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/94cdd87a-a76e-46dd-ba54-2584620c32a2-cnibin\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697254 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-os-release\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697287 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/94cdd87a-a76e-46dd-ba54-2584620c32a2-os-release\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697109 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-run-netns\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697346 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/373cf631-46b3-49f3-af97-be8271ce5150-rootfs\") pod \"machine-config-daemon-fqvg7\" (UID: \"373cf631-46b3-49f3-af97-be8271ce5150\") " pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697380 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-etc-kubernetes\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697472 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-system-cni-dir\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697298 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-hostroot\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697629 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-cnibin\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697641 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-multus-cni-dir\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc 
kubenswrapper[4983]: I1125 20:27:20.697703 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-multus-socket-dir-parent\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697725 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-run-multus-certs\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.697789 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/40e594b9-8aa2-400d-b72e-c36e4523ced3-host-var-lib-cni-bin\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.698267 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/40e594b9-8aa2-400d-b72e-c36e4523ced3-multus-daemon-config\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.698320 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/373cf631-46b3-49f3-af97-be8271ce5150-mcd-auth-proxy-config\") pod \"machine-config-daemon-fqvg7\" (UID: \"373cf631-46b3-49f3-af97-be8271ce5150\") " pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.698360 4983 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/94cdd87a-a76e-46dd-ba54-2584620c32a2-cni-binary-copy\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.698472 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/94cdd87a-a76e-46dd-ba54-2584620c32a2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.698480 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/40e594b9-8aa2-400d-b72e-c36e4523ced3-cni-binary-copy\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.698523 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/94cdd87a-a76e-46dd-ba54-2584620c32a2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.709399 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/373cf631-46b3-49f3-af97-be8271ce5150-proxy-tls\") pod \"machine-config-daemon-fqvg7\" (UID: \"373cf631-46b3-49f3-af97-be8271ce5150\") " pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.719960 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m8j5s\" (UniqueName: \"kubernetes.io/projected/373cf631-46b3-49f3-af97-be8271ce5150-kube-api-access-m8j5s\") pod \"machine-config-daemon-fqvg7\" (UID: \"373cf631-46b3-49f3-af97-be8271ce5150\") " pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.724365 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.730536 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl4vd\" (UniqueName: \"kubernetes.io/projected/94cdd87a-a76e-46dd-ba54-2584620c32a2-kube-api-access-dl4vd\") pod \"multus-additional-cni-plugins-hn4fk\" (UID: \"94cdd87a-a76e-46dd-ba54-2584620c32a2\") " pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.742745 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmxwc\" (UniqueName: 
\"kubernetes.io/projected/40e594b9-8aa2-400d-b72e-c36e4523ced3-kube-api-access-rmxwc\") pod \"multus-6fkbz\" (UID: \"40e594b9-8aa2-400d-b72e-c36e4523ced3\") " pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.743390 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5"} Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.743442 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01"} Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.743456 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f68cdffd5ead8e62a1742e1b1a42f49dc1174f6e0b21fe58662c65b45226e0c0"} Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.744506 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"77e43171a346dd09726e83dfddd6c0493257580d06c1fcca72f5c9cd8d2d4462"} Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.747847 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.749510 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff"} Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.750229 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.751672 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27"} Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.751717 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8e61a6d744222b3cc2323eacf07d64015e44f38d6b9c13060e9995bdaf6fed13"} Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.758335 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.760416 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rltkm" event={"ID":"40f035b7-d789-469f-976b-bc8b70a1a9b6","Type":"ContainerStarted","Data":"caf6320b6d568e233cfa06d92a5eb2dd9faae96f3797f0f037aa991b05b72b07"} Nov 25 20:27:20 crc kubenswrapper[4983]: E1125 20:27:20.774265 4983 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.779634 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 
dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.792563 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.810429 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.827675 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with 
unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.841321 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.851912 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.858272 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.863633 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6fkbz" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.867854 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.888427 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: W1125 20:27:20.893612 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40e594b9_8aa2_400d_b72e_c36e4523ced3.slice/crio-6eef013da656c511eddc1201a0e74a51d86fedf1c27d5e983db9a514488629fb WatchSource:0}: Error finding container 6eef013da656c511eddc1201a0e74a51d86fedf1c27d5e983db9a514488629fb: Status 404 returned error can't find the container with id 6eef013da656c511eddc1201a0e74a51d86fedf1c27d5e983db9a514488629fb Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.903999 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.919140 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.935233 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.949427 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.970013 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:20 crc kubenswrapper[4983]: I1125 20:27:20.986981 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.004357 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.031486 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.052831 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4t2p5"] Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.053923 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: W1125 20:27:21.057626 4983 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.057692 4983 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 20:27:21 crc kubenswrapper[4983]: W1125 20:27:21.057780 4983 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.057795 4983 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 20:27:21 crc kubenswrapper[4983]: W1125 20:27:21.057837 4983 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps 
"ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.057856 4983 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 20:27:21 crc kubenswrapper[4983]: W1125 20:27:21.058056 4983 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.058072 4983 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.058626 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.059021 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.063927 
4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.063928 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101239 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.101470 4983 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:27:23.101439916 +0000 UTC m=+24.213973308 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101545 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-slash\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101580 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-systemd\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101597 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-cni-netd\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101622 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-config\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101644 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-systemd-units\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101658 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-ovn\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101675 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-node-log\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101701 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-run-netns\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101730 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-etc-openvswitch\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101745 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-log-socket\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101781 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-script-lib\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101795 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-run-ovn-kubernetes\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101812 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovn-node-metrics-cert\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc 
kubenswrapper[4983]: I1125 20:27:21.101848 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5mng\" (UniqueName: \"kubernetes.io/projected/b577d7b6-2c09-4ed8-8907-36620b2145b2-kube-api-access-d5mng\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101862 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-openvswitch\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101875 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-cni-bin\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101896 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-env-overrides\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101931 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-var-lib-openvswitch\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101948 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-kubelet\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.101963 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.130166 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.154202 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.197367 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604
dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.202845 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-var-lib-openvswitch\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 
20:27:21.202882 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-kubelet\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.202905 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.202929 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.202954 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-slash\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.202974 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-systemd\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc 
kubenswrapper[4983]: I1125 20:27:21.202994 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-cni-netd\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203039 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-config\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203057 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-systemd-units\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203051 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-kubelet\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203081 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-ovn\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203109 4983 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-node-log\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203127 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-etc-openvswitch\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203122 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-cni-netd\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203145 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-log-socket\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203159 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203132 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-slash\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203192 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-log-socket\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203209 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-ovn\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203189 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-systemd\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203254 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-etc-openvswitch\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203229 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-node-log\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.203271 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.203311 4983 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.203336 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.203359 4983 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203298 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-systemd-units\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203216 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 
20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.203468 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:23.203431571 +0000 UTC m=+24.315965133 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203541 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-run-netns\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203609 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-run-ovn-kubernetes\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203634 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovn-node-metrics-cert\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc 
kubenswrapper[4983]: I1125 20:27:21.203606 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-run-netns\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203642 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-run-ovn-kubernetes\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203667 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-script-lib\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203701 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203738 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 
20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203762 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-openvswitch\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203785 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-cni-bin\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203804 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5mng\" (UniqueName: \"kubernetes.io/projected/b577d7b6-2c09-4ed8-8907-36620b2145b2-kube-api-access-d5mng\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203813 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-openvswitch\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.203825 4983 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203833 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-env-overrides\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.203893 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:23.203872663 +0000 UTC m=+24.316406055 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203930 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-cni-bin\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.203984 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.203997 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.203993 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-var-lib-openvswitch\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.204009 4983 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.204045 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:23.204037798 +0000 UTC m=+24.316571190 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.204271 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:23.204203682 +0000 UTC m=+24.316737114 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.225660 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.250617 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5mng\" (UniqueName: \"kubernetes.io/projected/b577d7b6-2c09-4ed8-8907-36620b2145b2-kube-api-access-d5mng\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.288210 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.329665 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.367183 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 
20:27:21.416770 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.451038 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.496453 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.526931 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.566161 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.605051 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.605142 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.605072 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.605218 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.605308 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:21 crc kubenswrapper[4983]: E1125 20:27:21.605593 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.612127 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.612968 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.613715 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.614362 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.614986 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.615207 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.615673 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.616369 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.617305 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.618172 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.618742 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.764389 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6fkbz" event={"ID":"40e594b9-8aa2-400d-b72e-c36e4523ced3","Type":"ContainerStarted","Data":"a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe"} Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.764754 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6fkbz" 
event={"ID":"40e594b9-8aa2-400d-b72e-c36e4523ced3","Type":"ContainerStarted","Data":"6eef013da656c511eddc1201a0e74a51d86fedf1c27d5e983db9a514488629fb"} Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.766020 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8"} Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.766043 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c"} Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.766055 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"cc758b5db138b17834b7854d7c4aa3533628a612fe8355772fe4359436c07088"} Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.767031 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rltkm" event={"ID":"40f035b7-d789-469f-976b-bc8b70a1a9b6","Type":"ContainerStarted","Data":"254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5"} Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.768314 4983 generic.go:334] "Generic (PLEG): container finished" podID="94cdd87a-a76e-46dd-ba54-2584620c32a2" containerID="a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216" exitCode=0 Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.768413 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" 
event={"ID":"94cdd87a-a76e-46dd-ba54-2584620c32a2","Type":"ContainerDied","Data":"a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216"} Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.768487 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" event={"ID":"94cdd87a-a76e-46dd-ba54-2584620c32a2","Type":"ContainerStarted","Data":"0971844234f4d0c97da8dfb1e351d64acf543e582aff6c8a67517b42dcf41f51"} Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.779813 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.795926 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.814577 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.848272 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.883767 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.884463 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-env-overrides\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.889929 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f01062
7ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.922737 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.945262 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.960086 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:21 crc kubenswrapper[4983]: I1125 20:27:21.987809 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.026993 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.068764 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.107324 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.144772 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.185304 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 
20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: E1125 20:27:22.204042 4983 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Nov 25 20:27:22 crc kubenswrapper[4983]: E1125 20:27:22.204073 4983 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Nov 25 20:27:22 crc kubenswrapper[4983]: E1125 20:27:22.204147 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-script-lib podName:b577d7b6-2c09-4ed8-8907-36620b2145b2 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:22.704121583 +0000 UTC m=+23.816654975 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-script-lib") pod "ovnkube-node-4t2p5" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2") : failed to sync configmap cache: timed out waiting for the condition Nov 25 20:27:22 crc kubenswrapper[4983]: E1125 20:27:22.204168 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-config podName:b577d7b6-2c09-4ed8-8907-36620b2145b2 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:22.704159874 +0000 UTC m=+23.816693266 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-config") pod "ovnkube-node-4t2p5" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2") : failed to sync configmap cache: timed out waiting for the condition Nov 25 20:27:22 crc kubenswrapper[4983]: E1125 20:27:22.204198 4983 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Nov 25 20:27:22 crc kubenswrapper[4983]: E1125 20:27:22.204222 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovn-node-metrics-cert podName:b577d7b6-2c09-4ed8-8907-36620b2145b2 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:22.704217156 +0000 UTC m=+23.816750548 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovn-node-metrics-cert") pod "ovnkube-node-4t2p5" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2") : failed to sync secret cache: timed out waiting for the condition Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.228877 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.267339 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.277151 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.324207 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.366947 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1
ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.407431 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.446215 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.493132 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.525956 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.563186 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.607444 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.620219 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-p4cjj"] Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.620692 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-p4cjj" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.644924 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba7
78198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.657617 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.677253 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.696528 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.717047 4983 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.719711 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/457d14e1-8f39-4341-b294-950c3fc924bf-host\") pod \"node-ca-p4cjj\" (UID: \"457d14e1-8f39-4341-b294-950c3fc924bf\") " pod="openshift-image-registry/node-ca-p4cjj" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.719751 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/457d14e1-8f39-4341-b294-950c3fc924bf-serviceca\") pod \"node-ca-p4cjj\" (UID: \"457d14e1-8f39-4341-b294-950c3fc924bf\") " pod="openshift-image-registry/node-ca-p4cjj" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.719801 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-config\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.719838 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovn-node-metrics-cert\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.719864 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-script-lib\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.719903 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7zbw\" (UniqueName: \"kubernetes.io/projected/457d14e1-8f39-4341-b294-950c3fc924bf-kube-api-access-j7zbw\") pod \"node-ca-p4cjj\" (UID: \"457d14e1-8f39-4341-b294-950c3fc924bf\") " pod="openshift-image-registry/node-ca-p4cjj" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.725974 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovn-node-metrics-cert\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.757462 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.761708 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-config\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.775793 4983 generic.go:334] "Generic (PLEG): container finished" podID="94cdd87a-a76e-46dd-ba54-2584620c32a2" containerID="ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b" exitCode=0 Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.775863 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" event={"ID":"94cdd87a-a76e-46dd-ba54-2584620c32a2","Type":"ContainerDied","Data":"ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b"} Nov 25 20:27:22 crc 
kubenswrapper[4983]: I1125 20:27:22.776609 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.778220 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439"} Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.781511 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-script-lib\") pod \"ovnkube-node-4t2p5\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.816333 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.820569 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7zbw\" (UniqueName: \"kubernetes.io/projected/457d14e1-8f39-4341-b294-950c3fc924bf-kube-api-access-j7zbw\") pod \"node-ca-p4cjj\" (UID: \"457d14e1-8f39-4341-b294-950c3fc924bf\") " pod="openshift-image-registry/node-ca-p4cjj" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.820780 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/457d14e1-8f39-4341-b294-950c3fc924bf-host\") pod \"node-ca-p4cjj\" (UID: \"457d14e1-8f39-4341-b294-950c3fc924bf\") " pod="openshift-image-registry/node-ca-p4cjj" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.820905 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/457d14e1-8f39-4341-b294-950c3fc924bf-serviceca\") pod \"node-ca-p4cjj\" (UID: \"457d14e1-8f39-4341-b294-950c3fc924bf\") " pod="openshift-image-registry/node-ca-p4cjj" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.821092 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/457d14e1-8f39-4341-b294-950c3fc924bf-host\") pod \"node-ca-p4cjj\" (UID: \"457d14e1-8f39-4341-b294-950c3fc924bf\") " pod="openshift-image-registry/node-ca-p4cjj" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.822138 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/457d14e1-8f39-4341-b294-950c3fc924bf-serviceca\") pod \"node-ca-p4cjj\" (UID: \"457d14e1-8f39-4341-b294-950c3fc924bf\") " pod="openshift-image-registry/node-ca-p4cjj" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.855752 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7zbw\" (UniqueName: \"kubernetes.io/projected/457d14e1-8f39-4341-b294-950c3fc924bf-kube-api-access-j7zbw\") pod \"node-ca-p4cjj\" (UID: \"457d14e1-8f39-4341-b294-950c3fc924bf\") " pod="openshift-image-registry/node-ca-p4cjj" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.866135 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.877672 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.908013 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.932683 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-p4cjj" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.952183 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static
-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b
03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6
7314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:22 crc kubenswrapper[4983]: I1125 20:27:22.991209 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.024579 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.068913 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.104255 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.123536 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.123781 4983 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:27:27.123763999 +0000 UTC m=+28.236297391 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.145994 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154ed
c32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 
2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.193710 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.224531 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.224608 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.224632 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.224662 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.224778 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.224784 4983 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.224820 4983 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.224859 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:27.2248375 +0000 UTC m=+28.337370972 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.224883 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:27.224871251 +0000 UTC m=+28.337404873 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.224887 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.224901 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.224911 4983 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.224942 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:27.224929302 +0000 UTC m=+28.337462694 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.224794 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.224958 4983 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.224977 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:27.224971064 +0000 UTC m=+28.337504456 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.230175 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.265831 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.305758 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.345394 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.391816 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.604851 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.604905 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.605008 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.604886 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.605121 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:23 crc kubenswrapper[4983]: E1125 20:27:23.605368 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.785801 4983 generic.go:334] "Generic (PLEG): container finished" podID="94cdd87a-a76e-46dd-ba54-2584620c32a2" containerID="77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32" exitCode=0 Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.785927 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" event={"ID":"94cdd87a-a76e-46dd-ba54-2584620c32a2","Type":"ContainerDied","Data":"77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32"} Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.788224 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p4cjj" event={"ID":"457d14e1-8f39-4341-b294-950c3fc924bf","Type":"ContainerStarted","Data":"a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70"} Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.788272 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p4cjj" event={"ID":"457d14e1-8f39-4341-b294-950c3fc924bf","Type":"ContainerStarted","Data":"7c336d235cfb13528f3e1bb673e6de2ae5e96809e0b9b236106f6a68a494fefc"} Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.790359 4983 generic.go:334] "Generic (PLEG): container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerID="ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6" exitCode=0 Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.790433 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6"} Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.790486 4983 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"1b837278eb882b6560262fba707494e01871ae9342e996f73ab509ed33207838"} Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.821002 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cr
i-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.839071 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.851469 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.876770 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 
20:27:23.887684 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.914708 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.938849 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.954949 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized 
nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.970443 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.985874 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:23 crc kubenswrapper[4983]: I1125 20:27:23.999277 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:23Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.014897 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.030748 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.046463 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.081355 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.095571 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.105522 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.120647 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 
20:27:24.145731 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.186540 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc 
kubenswrapper[4983]: I1125 20:27:24.234645 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.267770 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.310026 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.348248 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.386074 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.427189 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.464484 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.504739 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.583935 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.589468 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.595089 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.601991 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.616771 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 
20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.646041 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.685843 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc 
kubenswrapper[4983]: I1125 20:27:24.729648 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.771751 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.802036 4983 generic.go:334] "Generic (PLEG): container finished" podID="94cdd87a-a76e-46dd-ba54-2584620c32a2" containerID="9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c" exitCode=0 Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.802087 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" event={"ID":"94cdd87a-a76e-46dd-ba54-2584620c32a2","Type":"ContainerDied","Data":"9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c"} Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.807152 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154"} Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 
20:27:24.807224 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e"} Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.807264 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567"} Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.807291 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba"} Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.807315 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551"} Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.807342 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f"} Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.808718 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.846779 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.892354 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.927326 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:24 crc kubenswrapper[4983]: I1125 20:27:24.978999 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:24Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 
20:27:25.010801 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.053973 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"
},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.084671 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.131179 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.165106 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.205741 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.246246 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.284585 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.329456 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 
20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.367085 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.405913 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.447582 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.462803 4983 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.464754 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.464810 4983 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.464827 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.464999 4983 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.485729 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.538399 4983 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.538700 4983 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.539754 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.539811 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.539823 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.539844 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.539857 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:25Z","lastTransitionTime":"2025-11-25T20:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:25 crc kubenswrapper[4983]: E1125 20:27:25.555300 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.558577 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.558634 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.558648 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.558668 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.558682 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:25Z","lastTransitionTime":"2025-11-25T20:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.564631 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: E1125 20:27:25.573952 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.577490 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.577526 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.577537 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.577575 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.577587 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:25Z","lastTransitionTime":"2025-11-25T20:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:25 crc kubenswrapper[4983]: E1125 20:27:25.592786 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.601759 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.601835 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.601845 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.601863 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.601876 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:25Z","lastTransitionTime":"2025-11-25T20:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.604107 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.604149 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:25 crc kubenswrapper[4983]: E1125 20:27:25.604227 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.604111 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:25 crc kubenswrapper[4983]: E1125 20:27:25.604400 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:25 crc kubenswrapper[4983]: E1125 20:27:25.604740 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.616834 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: E1125 20:27:25.618779 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.622503 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.622538 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.622576 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.622598 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.622614 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:25Z","lastTransitionTime":"2025-11-25T20:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:25 crc kubenswrapper[4983]: E1125 20:27:25.642239 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: E1125 20:27:25.642500 4983 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.644954 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.645008 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.645026 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.645054 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.645072 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:25Z","lastTransitionTime":"2025-11-25T20:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.646853 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.690832 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010c
d2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.729365 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.749427 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.749473 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.749487 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.749505 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.749521 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:25Z","lastTransitionTime":"2025-11-25T20:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.819424 4983 generic.go:334] "Generic (PLEG): container finished" podID="94cdd87a-a76e-46dd-ba54-2584620c32a2" containerID="b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c" exitCode=0 Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.819527 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" event={"ID":"94cdd87a-a76e-46dd-ba54-2584620c32a2","Type":"ContainerDied","Data":"b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c"} Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.835426 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.852948 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.853005 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.853024 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 
20:27:25.853050 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.853068 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:25Z","lastTransitionTime":"2025-11-25T20:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.855303 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.885916 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.901679 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.950001 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20
:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.955295 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.955322 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.955330 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.955345 4983 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.955354 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:25Z","lastTransitionTime":"2025-11-25T20:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:25 crc kubenswrapper[4983]: I1125 20:27:25.972267 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee
88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources
\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:25Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.007160 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.052763 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.058028 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.058089 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.058104 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.058125 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.058139 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:26Z","lastTransitionTime":"2025-11-25T20:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.086459 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.128741 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.161086 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.161148 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.161167 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.161193 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.161211 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:26Z","lastTransitionTime":"2025-11-25T20:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.167034 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.207372 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.250371 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.266472 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.266576 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.266593 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.266618 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.266641 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:26Z","lastTransitionTime":"2025-11-25T20:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.290657 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z 
is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.326380 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.370242 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.370300 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.370318 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.370344 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.370363 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:26Z","lastTransitionTime":"2025-11-25T20:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.473636 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.473702 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.473720 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.473744 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.473762 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:26Z","lastTransitionTime":"2025-11-25T20:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.577037 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.577075 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.577085 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.577103 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.577114 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:26Z","lastTransitionTime":"2025-11-25T20:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.679524 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.679615 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.679634 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.679660 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.679673 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:26Z","lastTransitionTime":"2025-11-25T20:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.782397 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.782447 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.782461 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.782484 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.782499 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:26Z","lastTransitionTime":"2025-11-25T20:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.826834 4983 generic.go:334] "Generic (PLEG): container finished" podID="94cdd87a-a76e-46dd-ba54-2584620c32a2" containerID="0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368" exitCode=0 Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.826954 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" event={"ID":"94cdd87a-a76e-46dd-ba54-2584620c32a2","Type":"ContainerDied","Data":"0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368"} Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.834431 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5"} Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.843472 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.860757 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.885364 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.885414 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.885425 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.885445 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.885464 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:26Z","lastTransitionTime":"2025-11-25T20:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.888626 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.904915 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.924966 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.939657 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.952662 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.967188 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.983542 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.987295 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.987343 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.987356 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.987373 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.987388 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:26Z","lastTransitionTime":"2025-11-25T20:27:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:26 crc kubenswrapper[4983]: I1125 20:27:26.998775 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:
27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:26Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.021678 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.036570 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.045845 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.058582 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.067674 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.089465 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.089510 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.089521 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.089536 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.089545 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:27Z","lastTransitionTime":"2025-11-25T20:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.168025 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.168196 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:27:35.168168648 +0000 UTC m=+36.280702040 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.191298 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.191327 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.191335 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.191349 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:27 crc kubenswrapper[4983]: 
I1125 20:27:27.191360 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:27Z","lastTransitionTime":"2025-11-25T20:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.269435 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.269510 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.269534 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.269585 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.269694 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.269709 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.269720 4983 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.269764 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:35.269751312 +0000 UTC m=+36.382284694 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.269853 4983 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.269880 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:35.269874056 +0000 UTC m=+36.382407448 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.269910 4983 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.269928 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-25 20:27:35.269922777 +0000 UTC m=+36.382456169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.269966 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.269975 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.269983 4983 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.270004 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:35.269997579 +0000 UTC m=+36.382530961 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.295118 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.295194 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.295221 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.295253 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.295341 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:27Z","lastTransitionTime":"2025-11-25T20:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.398707 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.398766 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.398786 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.398812 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.398831 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:27Z","lastTransitionTime":"2025-11-25T20:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.502346 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.502384 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.502392 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.502412 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.502422 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:27Z","lastTransitionTime":"2025-11-25T20:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.604001 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.604136 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.604358 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.604464 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.604550 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:27 crc kubenswrapper[4983]: E1125 20:27:27.604714 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.605164 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.605251 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.605275 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.605300 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.605353 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:27Z","lastTransitionTime":"2025-11-25T20:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.708216 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.708257 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.708270 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.708288 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.708301 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:27Z","lastTransitionTime":"2025-11-25T20:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.811350 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.811392 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.811400 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.811415 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.811424 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:27Z","lastTransitionTime":"2025-11-25T20:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.843054 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" event={"ID":"94cdd87a-a76e-46dd-ba54-2584620c32a2","Type":"ContainerStarted","Data":"76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d"} Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.866780 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugin
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2dae
d8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\
",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.882323 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.912900 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20
:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.914504 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.914577 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.914596 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.914619 4983 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.914636 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:27Z","lastTransitionTime":"2025-11-25T20:27:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.932760 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.945717 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.963339 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:27 crc kubenswrapper[4983]: I1125 20:27:27.981728 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.006110 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:28Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.017161 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.017240 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.017258 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.017283 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.017301 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:28Z","lastTransitionTime":"2025-11-25T20:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.023730 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:28Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.045468 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 
20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:28Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.067455 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:28Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.087171 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:28Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.103949 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:28Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.120303 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.120343 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.120353 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.120389 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.120400 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:28Z","lastTransitionTime":"2025-11-25T20:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.122687 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:28Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.136069 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:28Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.223297 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:28 crc 
kubenswrapper[4983]: I1125 20:27:28.223355 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.223364 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.223380 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.223391 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:28Z","lastTransitionTime":"2025-11-25T20:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.326602 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.326642 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.326650 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.326664 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.326673 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:28Z","lastTransitionTime":"2025-11-25T20:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.429544 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.429618 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.429630 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.429649 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.429662 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:28Z","lastTransitionTime":"2025-11-25T20:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.532110 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.532158 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.532176 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.532199 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.532213 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:28Z","lastTransitionTime":"2025-11-25T20:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.634826 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.634880 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.634898 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.634920 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.634936 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:28Z","lastTransitionTime":"2025-11-25T20:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.737336 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.737375 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.737383 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.737414 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.737422 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:28Z","lastTransitionTime":"2025-11-25T20:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.840029 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.840110 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.840135 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.840170 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.840197 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:28Z","lastTransitionTime":"2025-11-25T20:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.942910 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.943254 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.943266 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.943284 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:28 crc kubenswrapper[4983]: I1125 20:27:28.943295 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:28Z","lastTransitionTime":"2025-11-25T20:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.045654 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.045718 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.045736 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.045759 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.045776 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:29Z","lastTransitionTime":"2025-11-25T20:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.149500 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.149608 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.149627 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.149652 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.149668 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:29Z","lastTransitionTime":"2025-11-25T20:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.251938 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.251976 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.251985 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.251999 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.252010 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:29Z","lastTransitionTime":"2025-11-25T20:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.354653 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.354699 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.354710 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.354728 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.354738 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:29Z","lastTransitionTime":"2025-11-25T20:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.367881 4983 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.457050 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.457127 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.457142 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.457172 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.457189 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:29Z","lastTransitionTime":"2025-11-25T20:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.560678 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.560768 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.560788 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.561468 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.561536 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:29Z","lastTransitionTime":"2025-11-25T20:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.604449 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.604467 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.604504 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:29 crc kubenswrapper[4983]: E1125 20:27:29.604629 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:29 crc kubenswrapper[4983]: E1125 20:27:29.604774 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:29 crc kubenswrapper[4983]: E1125 20:27:29.604968 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.623445 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.641570 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.664101 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.664156 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.664176 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.664231 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.664254 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:29Z","lastTransitionTime":"2025-11-25T20:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.685255 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.710746 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.732088 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.746181 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.761637 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.766244 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 
20:27:29.766279 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.766287 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.766302 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.766312 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:29Z","lastTransitionTime":"2025-11-25T20:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.772418 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.782653 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.792480 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.803464 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.812712 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.824636 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.833910 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.852367 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c"} Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.852876 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.852965 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.857764 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.869520 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.869775 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.869872 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.869980 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.870064 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:29Z","lastTransitionTime":"2025-11-25T20:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.874983 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.875157 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.877482 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1
d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.889500 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.899701 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.915693 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.926089 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.937220 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.949854 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.966949 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.971999 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.972038 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.972046 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.972062 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.972073 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:29Z","lastTransitionTime":"2025-11-25T20:27:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.980312 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:29 crc kubenswrapper[4983]: I1125 20:27:29.990862 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.000899 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.010740 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.023511 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.034188 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.046308 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.063862 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.074620 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.074657 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.074673 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.074693 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.074708 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:30Z","lastTransitionTime":"2025-11-25T20:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.075643 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.085296 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.100644 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.114268 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.129621 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.145979 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.169638 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.177049 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.177092 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.177108 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.177129 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.177144 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:30Z","lastTransitionTime":"2025-11-25T20:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.187533 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.201459 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.212987 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.225189 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.237297 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.254541 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.268603 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:30Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.279278 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:30 crc 
kubenswrapper[4983]: I1125 20:27:30.279318 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.279331 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.279348 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.279360 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:30Z","lastTransitionTime":"2025-11-25T20:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.382140 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.382189 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.382201 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.382219 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.382231 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:30Z","lastTransitionTime":"2025-11-25T20:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.484837 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.484873 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.484886 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.484902 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.484912 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:30Z","lastTransitionTime":"2025-11-25T20:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.588201 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.588240 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.588250 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.588265 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.588275 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:30Z","lastTransitionTime":"2025-11-25T20:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.690051 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.690079 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.690088 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.690102 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.690111 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:30Z","lastTransitionTime":"2025-11-25T20:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.792888 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.792994 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.793018 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.793052 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.793076 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:30Z","lastTransitionTime":"2025-11-25T20:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.859524 4983 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.895490 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.895542 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.895593 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.895626 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.895648 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:30Z","lastTransitionTime":"2025-11-25T20:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.998152 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.998196 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.998213 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.998237 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:30 crc kubenswrapper[4983]: I1125 20:27:30.998255 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:30Z","lastTransitionTime":"2025-11-25T20:27:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.101081 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.101125 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.101137 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.101156 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.101170 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:31Z","lastTransitionTime":"2025-11-25T20:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.203124 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.204075 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.204160 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.204248 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.204323 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:31Z","lastTransitionTime":"2025-11-25T20:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.307105 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.307146 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.307157 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.307179 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.307192 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:31Z","lastTransitionTime":"2025-11-25T20:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.409876 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.410102 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.410191 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.410269 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.410345 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:31Z","lastTransitionTime":"2025-11-25T20:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.513765 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.513807 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.513817 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.513836 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.513845 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:31Z","lastTransitionTime":"2025-11-25T20:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.604488 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.604487 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:31 crc kubenswrapper[4983]: E1125 20:27:31.604854 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.604620 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:31 crc kubenswrapper[4983]: E1125 20:27:31.605170 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:31 crc kubenswrapper[4983]: E1125 20:27:31.605389 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.618171 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.618233 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.618252 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.618281 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.618397 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:31Z","lastTransitionTime":"2025-11-25T20:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.725418 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.725458 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.725470 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.725485 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.725499 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:31Z","lastTransitionTime":"2025-11-25T20:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.828589 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.828626 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.828636 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.828651 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.828660 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:31Z","lastTransitionTime":"2025-11-25T20:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.863288 4983 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.932217 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.932531 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.932710 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.932860 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:31 crc kubenswrapper[4983]: I1125 20:27:31.933053 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:31Z","lastTransitionTime":"2025-11-25T20:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.036215 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.036254 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.036264 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.036298 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.036310 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:32Z","lastTransitionTime":"2025-11-25T20:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.138596 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.139166 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.139177 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.139192 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.139203 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:32Z","lastTransitionTime":"2025-11-25T20:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.242662 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.242722 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.242738 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.242761 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.242778 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:32Z","lastTransitionTime":"2025-11-25T20:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.345425 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.345529 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.345547 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.345594 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.345612 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:32Z","lastTransitionTime":"2025-11-25T20:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.448308 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.448358 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.448368 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.448384 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.448394 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:32Z","lastTransitionTime":"2025-11-25T20:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.550474 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.550510 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.550519 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.550536 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.550547 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:32Z","lastTransitionTime":"2025-11-25T20:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.652509 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.652678 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.652695 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.652720 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.652776 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:32Z","lastTransitionTime":"2025-11-25T20:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.756194 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.756270 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.756287 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.756315 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.756333 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:32Z","lastTransitionTime":"2025-11-25T20:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.859454 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.859521 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.859544 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.859605 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.859629 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:32Z","lastTransitionTime":"2025-11-25T20:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.869050 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/0.log" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.873550 4983 generic.go:334] "Generic (PLEG): container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerID="70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c" exitCode=1 Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.873614 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c"} Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.874889 4983 scope.go:117] "RemoveContainer" containerID="70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.895720 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1
ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:32Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.924847 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 
20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:32Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.948806 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:32Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.962924 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.962987 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.963002 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.963023 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.963036 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:32Z","lastTransitionTime":"2025-11-25T20:27:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.966599 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:32Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.981067 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:32Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:32 crc kubenswrapper[4983]: I1125 20:27:32.996163 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:32Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.011324 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.028849 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda526692
04ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:
27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.044844 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.065259 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.065301 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.065313 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.065332 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.065346 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:33Z","lastTransitionTime":"2025-11-25T20:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.069589 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.088334 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.107529 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.124009 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.141142 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.160885 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69"] Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.161597 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.163901 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.164843 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.170220 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.170264 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.170290 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.170313 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.170329 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:33Z","lastTransitionTime":"2025-11-25T20:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.170893 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:32Z\\\",\\\"message\\\":\\\"om github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 20:27:32.301086 6272 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301143 6272 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301531 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 20:27:32.301610 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 20:27:32.301644 6272 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 20:27:32.301685 6272 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 20:27:32.301694 6272 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 20:27:32.301701 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 20:27:32.301721 6272 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 20:27:32.301723 6272 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 20:27:32.301730 6272 factory.go:656] Stopping watch factory\\\\nI1125 20:27:32.301735 6272 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 20:27:32.301741 6272 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.189321 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.204232 4983 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.221884 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.232635 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2tp\" (UniqueName: \"kubernetes.io/projected/f8279fdf-f2c7-4a21-a3de-5ed70023b86c-kube-api-access-5z2tp\") pod \"ovnkube-control-plane-749d76644c-5zg69\" (UID: 
\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.232876 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8279fdf-f2c7-4a21-a3de-5ed70023b86c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5zg69\" (UID: \"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.233000 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8279fdf-f2c7-4a21-a3de-5ed70023b86c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5zg69\" (UID: \"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.233129 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8279fdf-f2c7-4a21-a3de-5ed70023b86c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5zg69\" (UID: \"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.237687 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.265658 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.272835 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.272879 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.272893 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.272914 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.272927 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:33Z","lastTransitionTime":"2025-11-25T20:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.280199 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.310746 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.335123 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8279fdf-f2c7-4a21-a3de-5ed70023b86c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5zg69\" (UID: \"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.335223 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z2tp\" (UniqueName: \"kubernetes.io/projected/f8279fdf-f2c7-4a21-a3de-5ed70023b86c-kube-api-access-5z2tp\") pod \"ovnkube-control-plane-749d76644c-5zg69\" (UID: \"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.335343 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8279fdf-f2c7-4a21-a3de-5ed70023b86c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5zg69\" (UID: \"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 
20:27:33.335389 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8279fdf-f2c7-4a21-a3de-5ed70023b86c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5zg69\" (UID: \"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.336282 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.336660 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f8279fdf-f2c7-4a21-a3de-5ed70023b86c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5zg69\" (UID: \"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.336446 4983 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f8279fdf-f2c7-4a21-a3de-5ed70023b86c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5zg69\" (UID: \"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.345586 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f8279fdf-f2c7-4a21-a3de-5ed70023b86c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5zg69\" (UID: \"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.361164 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.365473 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z2tp\" (UniqueName: \"kubernetes.io/projected/f8279fdf-f2c7-4a21-a3de-5ed70023b86c-kube-api-access-5z2tp\") pod \"ovnkube-control-plane-749d76644c-5zg69\" (UID: \"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.376176 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.376222 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.376234 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.376258 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.376272 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:33Z","lastTransitionTime":"2025-11-25T20:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.394880 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:32Z\\\",\\\"message\\\":\\\"om github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 20:27:32.301086 6272 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301143 6272 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301531 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 20:27:32.301610 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 20:27:32.301644 6272 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 20:27:32.301685 6272 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 20:27:32.301694 6272 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 20:27:32.301701 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 20:27:32.301721 6272 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 20:27:32.301723 6272 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 20:27:32.301730 6272 factory.go:656] Stopping watch factory\\\\nI1125 20:27:32.301735 6272 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 20:27:32.301741 6272 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.410055 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.427667 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.447650 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.464263 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.479057 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.479302 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.479488 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.479688 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.479833 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:33Z","lastTransitionTime":"2025-11-25T20:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.482946 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.485061 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.521786 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.585128 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.585166 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.585177 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.585194 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.585205 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:33Z","lastTransitionTime":"2025-11-25T20:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.606123 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.606211 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:33 crc kubenswrapper[4983]: E1125 20:27:33.606295 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.606437 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:33 crc kubenswrapper[4983]: E1125 20:27:33.606691 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:33 crc kubenswrapper[4983]: E1125 20:27:33.606789 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.688294 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.688351 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.688365 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.688388 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.688403 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:33Z","lastTransitionTime":"2025-11-25T20:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.792096 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.798162 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.798273 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.798363 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.798431 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:33Z","lastTransitionTime":"2025-11-25T20:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.879603 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/0.log" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.883045 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.883198 4983 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.886213 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" event={"ID":"f8279fdf-f2c7-4a21-a3de-5ed70023b86c","Type":"ContainerStarted","Data":"b526a948dad0f0317945be054a5bdeb2c4f54838783edcc90ec36723d480dd13"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.886240 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" event={"ID":"f8279fdf-f2c7-4a21-a3de-5ed70023b86c","Type":"ContainerStarted","Data":"054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.886252 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" event={"ID":"f8279fdf-f2c7-4a21-a3de-5ed70023b86c","Type":"ContainerStarted","Data":"03961f48c1eafeda7b06bbab79abe775539d32fe65179a63c91f88ececf5972d"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.901685 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.902804 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.902851 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.902862 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.902881 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.902894 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:33Z","lastTransitionTime":"2025-11-25T20:27:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.921909 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8
060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.935869 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.958630 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:32Z\\\",\\\"message\\\":\\\"om github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 20:27:32.301086 6272 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301143 6272 reflector.go:311] Stopping reflector *v1.Namespace (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301531 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 20:27:32.301610 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 20:27:32.301644 6272 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 20:27:32.301685 6272 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 20:27:32.301694 6272 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 20:27:32.301701 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 20:27:32.301721 6272 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 20:27:32.301723 6272 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 20:27:32.301730 6272 factory.go:656] Stopping watch factory\\\\nI1125 20:27:32.301735 6272 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 20:27:32.301741 6272 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.973662 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1
ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:33 crc kubenswrapper[4983]: I1125 20:27:33.994205 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 
20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:33Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.006082 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.006136 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.006147 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.006162 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.006172 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:34Z","lastTransitionTime":"2025-11-25T20:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.010293 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.024425 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.040154 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.051708 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.066073 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.080426 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda526692
04ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:
27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.094647 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.109244 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.109292 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.109306 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.109325 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.109338 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:34Z","lastTransitionTime":"2025-11-25T20:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.115875 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.134587 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.147853 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.159937 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.178125 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.196486 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.207155 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.211123 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.211155 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.211164 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 
20:27:34.211179 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.211188 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:34Z","lastTransitionTime":"2025-11-25T20:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.216344 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.232483 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2
d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.241432 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.250859 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.260742 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.276521 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:32Z\\\",\\\"message\\\":\\\"om github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 20:27:32.301086 6272 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301143 6272 reflector.go:311] Stopping reflector *v1.Namespace (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301531 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 20:27:32.301610 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 20:27:32.301644 6272 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 20:27:32.301685 6272 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 20:27:32.301694 6272 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 20:27:32.301701 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 20:27:32.301721 6272 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 20:27:32.301723 6272 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 20:27:32.301730 6272 factory.go:656] Stopping watch factory\\\\nI1125 20:27:32.301735 6272 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 20:27:32.301741 6272 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.287835 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54
838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.298909 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 
20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.311216 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.313172 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.313220 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.313229 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.313242 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.313251 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:34Z","lastTransitionTime":"2025-11-25T20:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.321525 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.331441 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.340884 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.416753 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 
20:27:34.416798 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.416812 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.416828 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.416840 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:34Z","lastTransitionTime":"2025-11-25T20:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.519085 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.519141 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.519158 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.519180 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.519198 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:34Z","lastTransitionTime":"2025-11-25T20:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.622455 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.622501 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.622513 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.622528 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.622540 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:34Z","lastTransitionTime":"2025-11-25T20:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.643748 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-59l9r"] Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.644260 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:34 crc kubenswrapper[4983]: E1125 20:27:34.644334 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.661588 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.681437 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.700335 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.715268 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.725166 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.725204 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.725218 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 
20:27:34.725235 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.725249 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:34Z","lastTransitionTime":"2025-11-25T20:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.730269 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.747947 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2
d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.748860 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.748956 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj7qw\" (UniqueName: \"kubernetes.io/projected/badc9ffd-b860-4ebb-a59f-044def6963d4-kube-api-access-kj7qw\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.761673 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.778906 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.795395 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.828706 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.828753 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.828767 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.828786 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.828802 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:34Z","lastTransitionTime":"2025-11-25T20:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.841993 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:32Z\\\",\\\"message\\\":\\\"om github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 20:27:32.301086 6272 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301143 6272 reflector.go:311] Stopping reflector *v1.Namespace (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301531 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 20:27:32.301610 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 20:27:32.301644 6272 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 20:27:32.301685 6272 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 20:27:32.301694 6272 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 20:27:32.301701 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 20:27:32.301721 6272 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 20:27:32.301723 6272 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 20:27:32.301730 6272 factory.go:656] Stopping watch factory\\\\nI1125 20:27:32.301735 6272 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 20:27:32.301741 6272 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.849619 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj7qw\" (UniqueName: \"kubernetes.io/projected/badc9ffd-b860-4ebb-a59f-044def6963d4-kube-api-access-kj7qw\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.849696 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:34 crc 
kubenswrapper[4983]: E1125 20:27:34.849953 4983 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:27:34 crc kubenswrapper[4983]: E1125 20:27:34.850055 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs podName:badc9ffd-b860-4ebb-a59f-044def6963d4 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:35.350034497 +0000 UTC m=+36.462567909 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs") pod "network-metrics-daemon-59l9r" (UID: "badc9ffd-b860-4ebb-a59f-044def6963d4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.868426 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54
838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.870736 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj7qw\" (UniqueName: \"kubernetes.io/projected/badc9ffd-b860-4ebb-a59f-044def6963d4-kube-api-access-kj7qw\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 
20:27:34.891064 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.893000 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/1.log" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.893760 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/0.log" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.896686 4983 generic.go:334] "Generic (PLEG): container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" 
containerID="dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d" exitCode=1 Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.896733 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d"} Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.896792 4983 scope.go:117] "RemoveContainer" containerID="70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.897843 4983 scope.go:117] "RemoveContainer" containerID="dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d" Nov 25 20:27:34 crc kubenswrapper[4983]: E1125 20:27:34.898590 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.905308 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.920949 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.931919 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.931945 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.931953 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.931966 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.931975 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:34Z","lastTransitionTime":"2025-11-25T20:27:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.935429 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.950292 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.963298 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc 
kubenswrapper[4983]: I1125 20:27:34.972632 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.986760 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808
b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad
66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:34 crc kubenswrapper[4983]: I1125 20:27:34.997510 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:34Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.014769 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20
:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.026031 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.034457 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.034494 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.034503 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.034518 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.034528 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.042936 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:32Z\\\",\\\"message\\\":\\\"om github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 20:27:32.301086 6272 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301143 6272 reflector.go:311] Stopping reflector *v1.Namespace (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301531 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 20:27:32.301610 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 20:27:32.301644 6272 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 20:27:32.301685 6272 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 20:27:32.301694 6272 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 20:27:32.301701 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 20:27:32.301721 6272 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 20:27:32.301723 6272 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 20:27:32.301730 6272 factory.go:656] Stopping watch factory\\\\nI1125 20:27:32.301735 6272 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 20:27:32.301741 6272 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"be-controller-manager-crc\\\\nI1125 20:27:34.033652 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033663 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1125 20:27:34.033670 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033673 6411 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1125 20:27:34.033669 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 20:27:34.033679 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1125 20:27:34.033682 6411 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1125 20:27:34.033620 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1125 20:27:34.033683 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1125 20:27:34.033651 6411 services_controller.go:452] Built service openshift-authenticati\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\
"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.051399 4983 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.060278 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.069841 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.080023 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.089106 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.097217 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc 
kubenswrapper[4983]: I1125 20:27:35.107901 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.119641 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.131645 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.136308 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.136331 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.136339 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc 
kubenswrapper[4983]: I1125 20:27:35.136352 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.136360 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.144921 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.156975 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.238930 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc 
kubenswrapper[4983]: I1125 20:27:35.238972 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.238980 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.238994 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.239006 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.253387 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.253576 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:27:51.253542839 +0000 UTC m=+52.366076231 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.340904 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.340942 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.340950 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.340967 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.340977 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.354405 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.354446 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.354466 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.354488 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.354506 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354603 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354625 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354632 4983 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354659 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354664 4983 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354683 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354696 4983 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354702 4983 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354689 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:51.354672475 +0000 UTC m=+52.467205867 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354636 4983 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354797 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:51.354782688 +0000 UTC m=+52.467316080 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354825 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs podName:badc9ffd-b860-4ebb-a59f-044def6963d4 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:36.354814058 +0000 UTC m=+37.467347450 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs") pod "network-metrics-daemon-59l9r" (UID: "badc9ffd-b860-4ebb-a59f-044def6963d4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354846 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:51.354838849 +0000 UTC m=+52.467372241 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.354870 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:51.35485975 +0000 UTC m=+52.467393242 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.443538 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.443610 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.443633 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.443656 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.443670 4983 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.546526 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.546614 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.546631 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.546655 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.546671 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.604778 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.604803 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.605161 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.604995 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.604799 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.605424 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.649415 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.649451 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.649459 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.649473 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.649483 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.679484 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.679671 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.679760 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.679851 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.680017 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.693183 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.698265 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.698324 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.698344 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.698369 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.698386 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.711683 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.716771 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.716866 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.716891 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.716923 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.716945 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.736424 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.740800 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.740935 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.741035 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.741137 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.741231 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.754228 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.758178 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.758233 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.758256 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.758284 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.758304 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.777336 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:35Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:35 crc kubenswrapper[4983]: E1125 20:27:35.777967 4983 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.779624 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.779699 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.779720 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.779750 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.779769 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.883139 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.883176 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.883189 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.883206 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.883219 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.903198 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/1.log" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.984874 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.984931 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.984948 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.984969 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:35 crc kubenswrapper[4983]: I1125 20:27:35.984986 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:35Z","lastTransitionTime":"2025-11-25T20:27:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.087121 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.087200 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.087222 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.087251 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.087272 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:36Z","lastTransitionTime":"2025-11-25T20:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.189636 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.189715 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.189746 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.189775 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.189797 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:36Z","lastTransitionTime":"2025-11-25T20:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.292712 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.292770 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.292789 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.292817 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.292835 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:36Z","lastTransitionTime":"2025-11-25T20:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.363983 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:36 crc kubenswrapper[4983]: E1125 20:27:36.364192 4983 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:27:36 crc kubenswrapper[4983]: E1125 20:27:36.364367 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs podName:badc9ffd-b860-4ebb-a59f-044def6963d4 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:38.364277521 +0000 UTC m=+39.476810953 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs") pod "network-metrics-daemon-59l9r" (UID: "badc9ffd-b860-4ebb-a59f-044def6963d4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.396975 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.397035 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.397059 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.397091 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.397114 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:36Z","lastTransitionTime":"2025-11-25T20:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.501239 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.501312 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.501330 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.501356 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.501375 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:36Z","lastTransitionTime":"2025-11-25T20:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.604791 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:36 crc kubenswrapper[4983]: E1125 20:27:36.604991 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.605452 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.605592 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.605618 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.605650 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.605671 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:36Z","lastTransitionTime":"2025-11-25T20:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.708944 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.709031 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.709051 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.709078 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.709097 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:36Z","lastTransitionTime":"2025-11-25T20:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.811768 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.811813 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.811827 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.811856 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.811881 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:36Z","lastTransitionTime":"2025-11-25T20:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.913881 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.913957 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.913978 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.914004 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:36 crc kubenswrapper[4983]: I1125 20:27:36.914021 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:36Z","lastTransitionTime":"2025-11-25T20:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.015990 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.016032 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.016043 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.016060 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.016072 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:37Z","lastTransitionTime":"2025-11-25T20:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.118655 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.118699 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.118711 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.118728 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.118742 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:37Z","lastTransitionTime":"2025-11-25T20:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.220976 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.221017 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.221029 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.221045 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.221058 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:37Z","lastTransitionTime":"2025-11-25T20:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.323378 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.323417 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.323429 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.323441 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.323451 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:37Z","lastTransitionTime":"2025-11-25T20:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.426196 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.426238 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.426252 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.426275 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.426292 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:37Z","lastTransitionTime":"2025-11-25T20:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.528195 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.528233 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.528244 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.528261 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.528273 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:37Z","lastTransitionTime":"2025-11-25T20:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.604630 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.604671 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:37 crc kubenswrapper[4983]: E1125 20:27:37.604786 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.604862 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:37 crc kubenswrapper[4983]: E1125 20:27:37.605030 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:37 crc kubenswrapper[4983]: E1125 20:27:37.605144 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.630943 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.630982 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.630993 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.631011 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.631024 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:37Z","lastTransitionTime":"2025-11-25T20:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.732777 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.732821 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.732833 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.732850 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.732863 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:37Z","lastTransitionTime":"2025-11-25T20:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.835314 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.835431 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.835455 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.835478 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.835495 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:37Z","lastTransitionTime":"2025-11-25T20:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.937628 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.937712 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.937742 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.937772 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:37 crc kubenswrapper[4983]: I1125 20:27:37.937791 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:37Z","lastTransitionTime":"2025-11-25T20:27:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.039885 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.039948 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.039960 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.039979 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.039991 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:38Z","lastTransitionTime":"2025-11-25T20:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.143850 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.143902 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.143916 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.143939 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.143953 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:38Z","lastTransitionTime":"2025-11-25T20:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.246702 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.246768 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.246787 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.246822 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.246839 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:38Z","lastTransitionTime":"2025-11-25T20:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.348928 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.348965 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.348973 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.348987 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.348996 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:38Z","lastTransitionTime":"2025-11-25T20:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.385077 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:38 crc kubenswrapper[4983]: E1125 20:27:38.385230 4983 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:27:38 crc kubenswrapper[4983]: E1125 20:27:38.385293 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs podName:badc9ffd-b860-4ebb-a59f-044def6963d4 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:42.385274491 +0000 UTC m=+43.497807883 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs") pod "network-metrics-daemon-59l9r" (UID: "badc9ffd-b860-4ebb-a59f-044def6963d4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.452045 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.452761 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.452828 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.452862 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.452882 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:38Z","lastTransitionTime":"2025-11-25T20:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.555189 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.555263 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.555287 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.555318 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.555341 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:38Z","lastTransitionTime":"2025-11-25T20:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.604511 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:38 crc kubenswrapper[4983]: E1125 20:27:38.604744 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.662430 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.662479 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.662495 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.662536 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.662613 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:38Z","lastTransitionTime":"2025-11-25T20:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.766118 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.766188 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.766206 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.766233 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.766294 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:38Z","lastTransitionTime":"2025-11-25T20:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.869227 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.869300 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.869312 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.869331 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.869341 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:38Z","lastTransitionTime":"2025-11-25T20:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.972097 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.972154 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.972166 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.972189 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:38 crc kubenswrapper[4983]: I1125 20:27:38.972207 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:38Z","lastTransitionTime":"2025-11-25T20:27:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.074654 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.074712 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.074767 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.074792 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.074809 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:39Z","lastTransitionTime":"2025-11-25T20:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.178013 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.178088 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.178102 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.178124 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.178138 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:39Z","lastTransitionTime":"2025-11-25T20:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.281298 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.281341 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.281350 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.281365 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.281376 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:39Z","lastTransitionTime":"2025-11-25T20:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.383221 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.383299 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.383317 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.383343 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.383368 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:39Z","lastTransitionTime":"2025-11-25T20:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.485589 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.485634 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.485645 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.485662 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.485673 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:39Z","lastTransitionTime":"2025-11-25T20:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.491163 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.506683 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.521512 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.534107 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.543367 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.555471 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.564443 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.584459 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20
:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.588400 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.588425 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.588434 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.588450 4983 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.588460 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:39Z","lastTransitionTime":"2025-11-25T20:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.609265 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:39 crc kubenswrapper[4983]: E1125 20:27:39.609426 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.609961 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:39 crc kubenswrapper[4983]: E1125 20:27:39.610058 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.610166 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:39 crc kubenswrapper[4983]: E1125 20:27:39.610285 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.642439 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.657136 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.674624 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:32Z\\\",\\\"message\\\":\\\"om github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 20:27:32.301086 6272 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301143 6272 reflector.go:311] Stopping reflector *v1.Namespace (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301531 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 20:27:32.301610 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 20:27:32.301644 6272 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 20:27:32.301685 6272 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 20:27:32.301694 6272 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 20:27:32.301701 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 20:27:32.301721 6272 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 20:27:32.301723 6272 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 20:27:32.301730 6272 factory.go:656] Stopping watch factory\\\\nI1125 20:27:32.301735 6272 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 20:27:32.301741 6272 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"be-controller-manager-crc\\\\nI1125 20:27:34.033652 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033663 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1125 20:27:34.033670 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033673 6411 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1125 20:27:34.033669 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 20:27:34.033679 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1125 20:27:34.033682 6411 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1125 20:27:34.033620 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1125 20:27:34.033683 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1125 20:27:34.033651 6411 services_controller.go:452] Built service openshift-authenticati\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\
"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.688830 4983 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.690639 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.690678 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 
20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.690698 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.690722 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.690739 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:39Z","lastTransitionTime":"2025-11-25T20:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.702922 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.715796 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.728023 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.738363 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.746659 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc 
kubenswrapper[4983]: I1125 20:27:39.760281 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a1
0255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.789098 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70a8f1667800f19d1d4b7b361c19e0f08c1c3cce3c95cc3e047a38fd96d83c0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:32Z\\\",\\\"message\\\":\\\"om github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 20:27:32.301086 6272 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301143 6272 reflector.go:311] Stopping reflector *v1.Namespace (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI1125 20:27:32.301531 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1125 20:27:32.301610 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 20:27:32.301644 6272 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 20:27:32.301685 6272 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 20:27:32.301694 6272 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 20:27:32.301701 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 20:27:32.301721 6272 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 20:27:32.301723 6272 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 20:27:32.301730 6272 factory.go:656] Stopping watch factory\\\\nI1125 20:27:32.301735 6272 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 20:27:32.301741 6272 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"be-controller-manager-crc\\\\nI1125 20:27:34.033652 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033663 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1125 20:27:34.033670 6411 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033673 6411 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1125 20:27:34.033669 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 20:27:34.033679 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1125 20:27:34.033682 6411 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1125 20:27:34.033620 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1125 20:27:34.033683 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1125 20:27:34.033651 6411 services_controller.go:452] Built service openshift-authenticati\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\
"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.793140 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.793194 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.793211 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.793239 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.793262 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:39Z","lastTransitionTime":"2025-11-25T20:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.806334 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.821015 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.834272 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.846111 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.856469 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.868278 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc 
kubenswrapper[4983]: I1125 20:27:39.882143 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a1
0255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.894444 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.897046 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.897109 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.897135 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.897165 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.897189 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:39Z","lastTransitionTime":"2025-11-25T20:27:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.907408 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.919445 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.932397 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.942433 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.964862 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:39 crc kubenswrapper[4983]: I1125 20:27:39.978651 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.000498 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.000587 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.000609 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.000633 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.000650 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:40Z","lastTransitionTime":"2025-11-25T20:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.001895 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:39Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.018135 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:40Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.104170 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.104273 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.104338 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 
20:27:40.104365 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.104425 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:40Z","lastTransitionTime":"2025-11-25T20:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.207755 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.207792 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.207800 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.207813 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.207822 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:40Z","lastTransitionTime":"2025-11-25T20:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.310904 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.310983 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.311007 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.311036 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.311063 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:40Z","lastTransitionTime":"2025-11-25T20:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.414426 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.414531 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.414575 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.414598 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.414610 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:40Z","lastTransitionTime":"2025-11-25T20:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.517165 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.517224 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.517235 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.517251 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.517264 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:40Z","lastTransitionTime":"2025-11-25T20:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.604619 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:40 crc kubenswrapper[4983]: E1125 20:27:40.604899 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.620277 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.620326 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.620342 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.620385 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.620402 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:40Z","lastTransitionTime":"2025-11-25T20:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.722887 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.722936 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.722947 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.722964 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.722975 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:40Z","lastTransitionTime":"2025-11-25T20:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.825065 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.825123 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.825132 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.825147 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.825157 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:40Z","lastTransitionTime":"2025-11-25T20:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.927501 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.927585 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.927602 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.927625 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:40 crc kubenswrapper[4983]: I1125 20:27:40.927644 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:40Z","lastTransitionTime":"2025-11-25T20:27:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.030455 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.030531 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.030539 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.030574 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.030586 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:41Z","lastTransitionTime":"2025-11-25T20:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.137997 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.138126 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.138144 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.138168 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.138185 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:41Z","lastTransitionTime":"2025-11-25T20:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.241114 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.241153 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.241162 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.241195 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.241205 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:41Z","lastTransitionTime":"2025-11-25T20:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.343733 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.343792 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.343803 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.343820 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.343830 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:41Z","lastTransitionTime":"2025-11-25T20:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.446235 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.446290 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.446309 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.446331 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.446349 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:41Z","lastTransitionTime":"2025-11-25T20:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.549129 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.549207 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.549229 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.549259 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.549280 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:41Z","lastTransitionTime":"2025-11-25T20:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.604264 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.604290 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:41 crc kubenswrapper[4983]: E1125 20:27:41.604634 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:41 crc kubenswrapper[4983]: E1125 20:27:41.604799 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.605087 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:41 crc kubenswrapper[4983]: E1125 20:27:41.605252 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.652967 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.653028 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.653046 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.653069 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.653087 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:41Z","lastTransitionTime":"2025-11-25T20:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.756444 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.756547 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.756601 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.756632 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.756655 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:41Z","lastTransitionTime":"2025-11-25T20:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.859392 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.859458 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.859482 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.859512 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.859534 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:41Z","lastTransitionTime":"2025-11-25T20:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.962081 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.962190 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.962210 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.962234 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:41 crc kubenswrapper[4983]: I1125 20:27:41.962253 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:41Z","lastTransitionTime":"2025-11-25T20:27:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.065642 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.065695 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.065717 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.065765 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.066345 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:42Z","lastTransitionTime":"2025-11-25T20:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.169712 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.169771 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.169783 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.169802 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.169814 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:42Z","lastTransitionTime":"2025-11-25T20:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.271970 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.272031 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.272050 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.272070 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.272086 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:42Z","lastTransitionTime":"2025-11-25T20:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.375637 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.375685 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.375694 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.375708 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.375717 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:42Z","lastTransitionTime":"2025-11-25T20:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.430065 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:42 crc kubenswrapper[4983]: E1125 20:27:42.430211 4983 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:27:42 crc kubenswrapper[4983]: E1125 20:27:42.430274 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs podName:badc9ffd-b860-4ebb-a59f-044def6963d4 nodeName:}" failed. No retries permitted until 2025-11-25 20:27:50.430257502 +0000 UTC m=+51.542790904 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs") pod "network-metrics-daemon-59l9r" (UID: "badc9ffd-b860-4ebb-a59f-044def6963d4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.477981 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.478043 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.478054 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.478092 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.478102 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:42Z","lastTransitionTime":"2025-11-25T20:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.581155 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.581191 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.581200 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.581214 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.581223 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:42Z","lastTransitionTime":"2025-11-25T20:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.604903 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:42 crc kubenswrapper[4983]: E1125 20:27:42.605094 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.684329 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.684395 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.684415 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.684455 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.684473 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:42Z","lastTransitionTime":"2025-11-25T20:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.787653 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.787706 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.787723 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.787748 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.787766 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:42Z","lastTransitionTime":"2025-11-25T20:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.890873 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.890934 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.890953 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.890980 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.891001 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:42Z","lastTransitionTime":"2025-11-25T20:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.994062 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.994100 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.994126 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.994141 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:42 crc kubenswrapper[4983]: I1125 20:27:42.994150 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:42Z","lastTransitionTime":"2025-11-25T20:27:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.096905 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.096968 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.096987 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.097013 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.097030 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:43Z","lastTransitionTime":"2025-11-25T20:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.200660 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.200711 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.200729 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.200753 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.200773 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:43Z","lastTransitionTime":"2025-11-25T20:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.304319 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.304395 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.304413 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.304441 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.304459 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:43Z","lastTransitionTime":"2025-11-25T20:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.407681 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.407732 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.407750 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.407775 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.407792 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:43Z","lastTransitionTime":"2025-11-25T20:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.510171 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.510235 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.510737 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.510782 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.510803 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:43Z","lastTransitionTime":"2025-11-25T20:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.604183 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:43 crc kubenswrapper[4983]: E1125 20:27:43.604357 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.604445 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:43 crc kubenswrapper[4983]: E1125 20:27:43.604715 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.604890 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:43 crc kubenswrapper[4983]: E1125 20:27:43.604963 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.613741 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.613775 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.613787 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.613801 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.613814 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:43Z","lastTransitionTime":"2025-11-25T20:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.716183 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.716249 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.716265 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.716287 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.716300 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:43Z","lastTransitionTime":"2025-11-25T20:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.818985 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.819040 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.819057 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.819081 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.819097 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:43Z","lastTransitionTime":"2025-11-25T20:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.922400 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.922456 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.922474 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.922501 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:43 crc kubenswrapper[4983]: I1125 20:27:43.922525 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:43Z","lastTransitionTime":"2025-11-25T20:27:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.025441 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.025674 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.025757 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.025827 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.025882 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:44Z","lastTransitionTime":"2025-11-25T20:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.128443 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.128487 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.128497 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.128513 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.128524 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:44Z","lastTransitionTime":"2025-11-25T20:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.231954 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.231997 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.232008 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.232025 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.232038 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:44Z","lastTransitionTime":"2025-11-25T20:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.334415 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.334454 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.334462 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.334478 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.334490 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:44Z","lastTransitionTime":"2025-11-25T20:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.437376 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.437419 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.437431 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.437448 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.437459 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:44Z","lastTransitionTime":"2025-11-25T20:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.540531 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.540632 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.540649 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.540672 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.540692 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:44Z","lastTransitionTime":"2025-11-25T20:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.628743 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:44 crc kubenswrapper[4983]: E1125 20:27:44.629008 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.643499 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.643660 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.643733 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.643769 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.643859 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:44Z","lastTransitionTime":"2025-11-25T20:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.746418 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.746496 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.746522 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.746590 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.746616 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:44Z","lastTransitionTime":"2025-11-25T20:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.849469 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.849638 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.849710 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.849743 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.849806 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:44Z","lastTransitionTime":"2025-11-25T20:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.953305 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.953441 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.953467 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.953494 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:44 crc kubenswrapper[4983]: I1125 20:27:44.953619 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:44Z","lastTransitionTime":"2025-11-25T20:27:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.055814 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.055882 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.055891 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.055925 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.055939 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.160839 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.160878 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.160886 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.160916 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.160927 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.263643 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.263727 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.263747 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.263816 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.263835 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.366851 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.366944 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.366961 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.366989 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.367012 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.470972 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.471034 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.471051 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.471076 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.471095 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.574590 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.574659 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.574682 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.574716 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.574735 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.604625 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:45 crc kubenswrapper[4983]: E1125 20:27:45.604802 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.605334 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:45 crc kubenswrapper[4983]: E1125 20:27:45.605457 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.605641 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:45 crc kubenswrapper[4983]: E1125 20:27:45.605740 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.678379 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.678470 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.678498 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.678536 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.678611 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.782857 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.782925 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.782944 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.782973 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.782992 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.849238 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.849335 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.849374 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.849410 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.849431 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: E1125 20:27:45.876444 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:45Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.882541 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.882641 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.883011 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.883095 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.883528 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: E1125 20:27:45.908792 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:45Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.913988 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.914147 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.914233 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.914317 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.914396 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: E1125 20:27:45.931157 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:45Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.936551 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.936670 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.936689 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.936722 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.936744 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: E1125 20:27:45.954101 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:45Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.959913 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.959953 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.959966 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.959986 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.960001 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:45 crc kubenswrapper[4983]: E1125 20:27:45.979776 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:45Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:45 crc kubenswrapper[4983]: E1125 20:27:45.980005 4983 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.982226 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.982295 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.982330 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.982370 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:45 crc kubenswrapper[4983]: I1125 20:27:45.982399 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:45Z","lastTransitionTime":"2025-11-25T20:27:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.085331 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.085392 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.085409 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.085434 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.085452 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:46Z","lastTransitionTime":"2025-11-25T20:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.188637 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.188689 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.188714 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.188750 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.188774 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:46Z","lastTransitionTime":"2025-11-25T20:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.291846 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.292069 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.292161 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.292233 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.292296 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:46Z","lastTransitionTime":"2025-11-25T20:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.395026 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.395078 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.395098 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.395117 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.395131 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:46Z","lastTransitionTime":"2025-11-25T20:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.497123 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.497157 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.497165 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.497178 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.497186 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:46Z","lastTransitionTime":"2025-11-25T20:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.599110 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.599156 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.599169 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.599187 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.599200 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:46Z","lastTransitionTime":"2025-11-25T20:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.604827 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:46 crc kubenswrapper[4983]: E1125 20:27:46.605027 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.702546 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.703291 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.703388 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.703479 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.703609 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:46Z","lastTransitionTime":"2025-11-25T20:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.806833 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.807142 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.807242 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.807332 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.807427 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:46Z","lastTransitionTime":"2025-11-25T20:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.910374 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.910436 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.910461 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.910491 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:46 crc kubenswrapper[4983]: I1125 20:27:46.910514 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:46Z","lastTransitionTime":"2025-11-25T20:27:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.012667 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.012943 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.013009 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.013076 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.013143 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:47Z","lastTransitionTime":"2025-11-25T20:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.115814 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.116207 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.116424 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.116682 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.116895 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:47Z","lastTransitionTime":"2025-11-25T20:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.219309 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.219378 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.219400 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.219433 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.219455 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:47Z","lastTransitionTime":"2025-11-25T20:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.322456 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.322527 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.322544 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.322608 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.322626 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:47Z","lastTransitionTime":"2025-11-25T20:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.424837 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.424878 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.424886 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.424899 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.424909 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:47Z","lastTransitionTime":"2025-11-25T20:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.527903 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.527962 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.527980 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.528005 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.528022 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:47Z","lastTransitionTime":"2025-11-25T20:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.604718 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.604757 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:47 crc kubenswrapper[4983]: E1125 20:27:47.604845 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.604716 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:47 crc kubenswrapper[4983]: E1125 20:27:47.605015 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:47 crc kubenswrapper[4983]: E1125 20:27:47.605071 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.606700 4983 scope.go:117] "RemoveContainer" containerID="dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.630312 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 
20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ce
aeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.630962 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.631025 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.631046 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.631071 4983 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.631089 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:47Z","lastTransitionTime":"2025-11-25T20:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.647258 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.665715 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.677190 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.711123 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.733901 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 
20:27:47.733940 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.733952 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.733971 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.733982 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:47Z","lastTransitionTime":"2025-11-25T20:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.738873 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.755073 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.777002 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.812922 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.830040 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.836506 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.836587 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.836597 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 
20:27:47.836613 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.836622 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:47Z","lastTransitionTime":"2025-11-25T20:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.843033 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.867700 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2
d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.881137 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.904093 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.918127 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.939079 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.939141 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.939156 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.939180 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.939199 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:47Z","lastTransitionTime":"2025-11-25T20:27:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.949314 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"be-controller-manager-crc\\\\nI1125 20:27:34.033652 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033663 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1125 20:27:34.033670 6411 
obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033673 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1125 20:27:34.033669 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 20:27:34.033679 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1125 20:27:34.033682 6411 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1125 20:27:34.033620 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1125 20:27:34.033683 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1125 20:27:34.033651 6411 services_controller.go:452] Built service openshift-authenticati\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b3250
73a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.950905 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/1.log" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.954755 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec"} Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.954893 4983 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.966845 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54
838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:47 crc kubenswrapper[4983]: I1125 20:27:47.997387 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:47Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.016066 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.042161 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.042191 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.042200 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 
20:27:48.042217 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.042230 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:48Z","lastTransitionTime":"2025-11-25T20:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.042740 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.062986 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2
d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.076913 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.099317 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.145716 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.145790 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.145812 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.145840 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.145857 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:48Z","lastTransitionTime":"2025-11-25T20:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.149394 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e
8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.177683 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"be-controller-manager-crc\\\\nI1125 20:27:34.033652 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033663 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1125 20:27:34.033670 6411 
obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033673 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1125 20:27:34.033669 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 20:27:34.033679 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1125 20:27:34.033682 6411 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1125 20:27:34.033620 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1125 20:27:34.033683 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1125 20:27:34.033651 6411 services_controller.go:452] Built service 
openshift-authenticati\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-con
fig/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-s
etup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.189748 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54
838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.212388 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\"
,\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.227795 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.242009 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.248811 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.248883 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.248898 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.248923 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.248938 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:48Z","lastTransitionTime":"2025-11-25T20:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.253742 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.266371 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.277861 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc 
kubenswrapper[4983]: I1125 20:27:48.290839 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.305038 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.352631 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:48 crc 
kubenswrapper[4983]: I1125 20:27:48.352681 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.352693 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.352711 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.352724 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:48Z","lastTransitionTime":"2025-11-25T20:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.457096 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.457791 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.458038 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.458290 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.458493 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:48Z","lastTransitionTime":"2025-11-25T20:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.562943 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.563021 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.563036 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.563064 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.563080 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:48Z","lastTransitionTime":"2025-11-25T20:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.604734 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:48 crc kubenswrapper[4983]: E1125 20:27:48.604878 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.666602 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.666651 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.666664 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.666684 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.666695 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:48Z","lastTransitionTime":"2025-11-25T20:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.770232 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.770300 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.770319 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.770347 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.770367 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:48Z","lastTransitionTime":"2025-11-25T20:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.873790 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.873856 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.873866 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.873879 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.873889 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:48Z","lastTransitionTime":"2025-11-25T20:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.963125 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/2.log" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.964736 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/1.log" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.970989 4983 generic.go:334] "Generic (PLEG): container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerID="e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec" exitCode=1 Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.971063 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec"} Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.971117 4983 scope.go:117] "RemoveContainer" containerID="dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.972290 4983 scope.go:117] "RemoveContainer" containerID="e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec" Nov 25 20:27:48 crc kubenswrapper[4983]: E1125 20:27:48.972540 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.976986 4983 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.977028 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.977045 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.977068 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.977109 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:48Z","lastTransitionTime":"2025-11-25T20:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:48 crc kubenswrapper[4983]: I1125 20:27:48.992662 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.016036 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.031344 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.052129 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.070624 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.079735 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.079822 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.079845 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.079887 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.079912 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:49Z","lastTransitionTime":"2025-11-25T20:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.095890 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.115348 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.141519 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"be-controller-manager-crc\\\\nI1125 20:27:34.033652 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033663 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1125 20:27:34.033670 6411 
obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033673 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1125 20:27:34.033669 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 20:27:34.033679 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1125 20:27:34.033682 6411 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1125 20:27:34.033620 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1125 20:27:34.033683 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1125 20:27:34.033651 6411 services_controller.go:452] Built service openshift-authenticati\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:48Z\\\",\\\"message\\\":\\\"e openshift-config-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1125 20:27:48.578713 6621 services_controller.go:445] Built service openshift-config-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1125 20:27:48.578721 6621 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z]\\\\nI1125 20:27:48.578726 6621 services_controller.go:451] Built service openshift-config-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", P\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-c
ni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.158095 4983 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.174918 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.183206 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.183264 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.183278 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.183300 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.183314 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:49Z","lastTransitionTime":"2025-11-25T20:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.192073 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e
8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.207591 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.226706 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.240210 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc 
kubenswrapper[4983]: I1125 20:27:49.260928 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a1
0255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.276448 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.286681 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.286854 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.286927 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.286999 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.287074 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:49Z","lastTransitionTime":"2025-11-25T20:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.292197 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.390890 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.390971 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.390996 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.391031 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.391058 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:49Z","lastTransitionTime":"2025-11-25T20:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.495074 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.495527 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.495789 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.496002 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.496330 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:49Z","lastTransitionTime":"2025-11-25T20:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.600423 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.600507 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.600616 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.600653 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.600675 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:49Z","lastTransitionTime":"2025-11-25T20:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.604814 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.604995 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:49 crc kubenswrapper[4983]: E1125 20:27:49.605165 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.605225 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:49 crc kubenswrapper[4983]: E1125 20:27:49.605346 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:49 crc kubenswrapper[4983]: E1125 20:27:49.605523 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.630810 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.654021 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.675373 4983 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.693610 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.703484 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.703628 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.703691 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.703760 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.703787 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:49Z","lastTransitionTime":"2025-11-25T20:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.717447 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.735992 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.764281 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.787284 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.806958 4983 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.807021 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.807068 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.807102 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.807123 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:49Z","lastTransitionTime":"2025-11-25T20:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.824175 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd74613a070269b19eab12417f85cfab6dc1e34ad20ed7ff4f8bed4c57ca3d8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"be-controller-manager-crc\\\\nI1125 20:27:34.033652 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033663 6411 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1125 20:27:34.033670 6411 
obj_retry.go:365] Adding new object: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 20:27:34.033673 6411 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1125 20:27:34.033669 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 20:27:34.033679 6411 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1125 20:27:34.033682 6411 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1125 20:27:34.033620 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1125 20:27:34.033683 6411 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1125 20:27:34.033651 6411 services_controller.go:452] Built service openshift-authenticati\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:48Z\\\",\\\"message\\\":\\\"e openshift-config-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1125 20:27:48.578713 6621 services_controller.go:445] Built service openshift-config-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1125 20:27:48.578721 6621 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy 
controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z]\\\\nI1125 20:27:48.578726 6621 services_controller.go:451] Built service openshift-config-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", P\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-c
ni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.843416 4983 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.867022 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.885711 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.906396 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.909957 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.910033 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.910060 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.910092 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.910118 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:49Z","lastTransitionTime":"2025-11-25T20:27:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.927455 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.951699 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc 
kubenswrapper[4983]: I1125 20:27:49.973988 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a1
0255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.978837 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/2.log" Nov 25 20:27:49 crc kubenswrapper[4983]: I1125 20:27:49.994823 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:49Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.012966 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.013047 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.013070 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.013101 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.013125 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:50Z","lastTransitionTime":"2025-11-25T20:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.116518 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.116680 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.116703 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.116737 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.116755 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:50Z","lastTransitionTime":"2025-11-25T20:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.219987 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.220512 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.220695 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.220899 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.221059 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:50Z","lastTransitionTime":"2025-11-25T20:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.324691 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.324755 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.324780 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.324815 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.324838 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:50Z","lastTransitionTime":"2025-11-25T20:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.427910 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.427975 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.427992 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.428020 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.428041 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:50Z","lastTransitionTime":"2025-11-25T20:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.430536 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:50 crc kubenswrapper[4983]: E1125 20:27:50.430706 4983 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:27:50 crc kubenswrapper[4983]: E1125 20:27:50.430771 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs podName:badc9ffd-b860-4ebb-a59f-044def6963d4 nodeName:}" failed. No retries permitted until 2025-11-25 20:28:06.430753945 +0000 UTC m=+67.543287337 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs") pod "network-metrics-daemon-59l9r" (UID: "badc9ffd-b860-4ebb-a59f-044def6963d4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.531310 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.531376 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.531396 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.531432 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.531453 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:50Z","lastTransitionTime":"2025-11-25T20:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.605099 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:50 crc kubenswrapper[4983]: E1125 20:27:50.605362 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.635089 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.635155 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.635173 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.635201 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.635223 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:50Z","lastTransitionTime":"2025-11-25T20:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.738316 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.738390 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.738408 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.738439 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.738457 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:50Z","lastTransitionTime":"2025-11-25T20:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.842773 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.842830 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.842843 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.842866 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:50 crc kubenswrapper[4983]: I1125 20:27:50.842883 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:50Z","lastTransitionTime":"2025-11-25T20:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:50.946016 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:50.946070 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:50.946090 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:50.946117 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:50.946137 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:50Z","lastTransitionTime":"2025-11-25T20:27:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.050125 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.050155 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.050166 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.050182 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.050194 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:51Z","lastTransitionTime":"2025-11-25T20:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.152817 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.152874 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.152887 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.152903 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.152915 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:51Z","lastTransitionTime":"2025-11-25T20:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.256879 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.256926 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.256943 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.256972 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.256993 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:51Z","lastTransitionTime":"2025-11-25T20:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.309442 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.309627 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.309800 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:28:23.309759485 +0000 UTC m=+84.422292897 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.311463 4983 scope.go:117] "RemoveContainer" containerID="e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec" Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.311805 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.329137 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.347922 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.361081 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:51 crc 
kubenswrapper[4983]: I1125 20:27:51.361156 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.361175 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.361205 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.361224 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:51Z","lastTransitionTime":"2025-11-25T20:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.367994 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.394757 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.409138 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.411542 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.411688 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.411732 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.411828 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.412018 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.412060 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.412086 4983 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.412164 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 20:28:23.412138214 +0000 UTC m=+84.524671646 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.412081 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.412247 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.412277 4983 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.412176 4983 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.412368 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 20:28:23.412335979 +0000 UTC m=+84.524869411 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.412527 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:28:23.412412741 +0000 UTC m=+84.524946153 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.412625 4983 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.412757 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:28:23.41272601 +0000 UTC m=+84.525259582 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.433160 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"na
me\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.465775 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.473874 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.473939 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:51 crc 
kubenswrapper[4983]: I1125 20:27:51.473957 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.473980 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.473994 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:51Z","lastTransitionTime":"2025-11-25T20:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.495962 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:48Z\\\",\\\"message\\\":\\\"e openshift-config-operator/metrics LB per-node configs for network=default: 
[]services.lbConfig(nil)\\\\nI1125 20:27:48.578713 6621 services_controller.go:445] Built service openshift-config-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1125 20:27:48.578721 6621 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z]\\\\nI1125 20:27:48.578726 6621 services_controller.go:451] Built service openshift-config-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", P\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b3250
73a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.515029 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54
838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.530472 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.544427 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.562338 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.577479 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.577540 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.577579 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.577505 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf
3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.579271 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.579315 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:51Z","lastTransitionTime":"2025-11-25T20:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.594588 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.604965 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.605175 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.605964 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.606154 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.606242 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:51 crc kubenswrapper[4983]: E1125 20:27:51.606318 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.613385 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582577
1aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.631126 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.652323 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:51Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.682850 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.682905 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.682919 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.682943 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.682958 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:51Z","lastTransitionTime":"2025-11-25T20:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.785645 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.785685 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.785695 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.785710 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.785720 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:51Z","lastTransitionTime":"2025-11-25T20:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.888572 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.888642 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.888652 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.888672 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.888684 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:51Z","lastTransitionTime":"2025-11-25T20:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.991515 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.991576 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.991589 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.991602 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:51 crc kubenswrapper[4983]: I1125 20:27:51.991611 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:51Z","lastTransitionTime":"2025-11-25T20:27:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.094790 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.094841 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.094863 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.094885 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.094898 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:52Z","lastTransitionTime":"2025-11-25T20:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.168239 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.181196 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.183371 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.193595 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.198022 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.198066 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.198076 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.198096 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.198108 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:52Z","lastTransitionTime":"2025-11-25T20:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.210426 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.225894 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.245375 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.258526 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.276043 4983 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:48Z\\\",\\\"message\\\":\\\"e openshift-config-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1125 20:27:48.578713 6621 services_controller.go:445] Built service openshift-config-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1125 20:27:48.578721 6621 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z]\\\\nI1125 20:27:48.578726 6621 services_controller.go:451] Built service openshift-config-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", P\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b3250
73a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.288159 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54
838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.298900 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.300345 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.300398 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.300410 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.300429 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.300441 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:52Z","lastTransitionTime":"2025-11-25T20:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.309876 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.320488 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.331484 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.340962 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc 
kubenswrapper[4983]: I1125 20:27:52.361343 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a1
0255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.382483 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.393354 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.402458 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.402641 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.402702 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.402763 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.402818 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:52Z","lastTransitionTime":"2025-11-25T20:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.405268 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:
27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:52Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.505845 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.505880 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.505888 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.505903 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.505912 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:52Z","lastTransitionTime":"2025-11-25T20:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.604521 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:52 crc kubenswrapper[4983]: E1125 20:27:52.604679 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.608296 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.608348 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.608370 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.608404 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.608426 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:52Z","lastTransitionTime":"2025-11-25T20:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.711388 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.711440 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.711456 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.711479 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.711495 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:52Z","lastTransitionTime":"2025-11-25T20:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.814718 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.814788 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.814826 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.814849 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.814864 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:52Z","lastTransitionTime":"2025-11-25T20:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.918259 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.918404 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.918424 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.918496 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:52 crc kubenswrapper[4983]: I1125 20:27:52.918518 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:52Z","lastTransitionTime":"2025-11-25T20:27:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.021241 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.021389 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.021413 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.021447 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.021470 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:53Z","lastTransitionTime":"2025-11-25T20:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.124154 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.124206 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.124215 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.124233 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.124243 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:53Z","lastTransitionTime":"2025-11-25T20:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.227405 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.227465 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.227486 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.227511 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.227530 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:53Z","lastTransitionTime":"2025-11-25T20:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.329549 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.329654 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.329688 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.329719 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.329744 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:53Z","lastTransitionTime":"2025-11-25T20:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.432274 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.432323 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.432334 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.432350 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.432361 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:53Z","lastTransitionTime":"2025-11-25T20:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.535023 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.535083 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.535099 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.535120 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.535139 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:53Z","lastTransitionTime":"2025-11-25T20:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.604482 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.604584 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:53 crc kubenswrapper[4983]: E1125 20:27:53.604673 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.604497 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:53 crc kubenswrapper[4983]: E1125 20:27:53.604794 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:53 crc kubenswrapper[4983]: E1125 20:27:53.604879 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.637818 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.637860 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.637870 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.637885 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.637896 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:53Z","lastTransitionTime":"2025-11-25T20:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.741622 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.741717 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.741735 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.741764 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.741782 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:53Z","lastTransitionTime":"2025-11-25T20:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.844492 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.844539 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.844566 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.844581 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.844592 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:53Z","lastTransitionTime":"2025-11-25T20:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.947972 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.948032 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.948042 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.948055 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:53 crc kubenswrapper[4983]: I1125 20:27:53.948067 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:53Z","lastTransitionTime":"2025-11-25T20:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.050928 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.050956 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.050964 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.050979 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.050987 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:54Z","lastTransitionTime":"2025-11-25T20:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.153723 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.153788 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.153805 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.153830 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.153847 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:54Z","lastTransitionTime":"2025-11-25T20:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.257350 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.257389 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.257398 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.257417 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.257427 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:54Z","lastTransitionTime":"2025-11-25T20:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.359627 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.359684 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.359693 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.359710 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.359721 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:54Z","lastTransitionTime":"2025-11-25T20:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.462488 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.462554 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.462572 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.462652 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.462676 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:54Z","lastTransitionTime":"2025-11-25T20:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.565254 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.565306 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.565318 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.565341 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.565359 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:54Z","lastTransitionTime":"2025-11-25T20:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.604764 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:54 crc kubenswrapper[4983]: E1125 20:27:54.604943 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.673430 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.673475 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.673485 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.673499 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.673509 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:54Z","lastTransitionTime":"2025-11-25T20:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.776268 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.776343 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.776358 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.776382 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.776398 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:54Z","lastTransitionTime":"2025-11-25T20:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.879376 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.879454 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.879465 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.879485 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.879497 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:54Z","lastTransitionTime":"2025-11-25T20:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.981905 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.981946 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.981953 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.981972 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:54 crc kubenswrapper[4983]: I1125 20:27:54.981983 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:54Z","lastTransitionTime":"2025-11-25T20:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.084514 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.084615 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.084635 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.084662 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.084680 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:55Z","lastTransitionTime":"2025-11-25T20:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.187325 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.187393 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.187404 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.187418 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.187427 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:55Z","lastTransitionTime":"2025-11-25T20:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.289470 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.289587 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.289601 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.289619 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.289633 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:55Z","lastTransitionTime":"2025-11-25T20:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.391943 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.392026 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.392049 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.392082 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.392106 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:55Z","lastTransitionTime":"2025-11-25T20:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.495306 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.495400 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.495418 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.495446 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.495468 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:55Z","lastTransitionTime":"2025-11-25T20:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.599759 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.599846 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.599868 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.599903 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.599926 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:55Z","lastTransitionTime":"2025-11-25T20:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.604442 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.604442 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:55 crc kubenswrapper[4983]: E1125 20:27:55.604705 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.604804 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:55 crc kubenswrapper[4983]: E1125 20:27:55.604967 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:55 crc kubenswrapper[4983]: E1125 20:27:55.605118 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.704193 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.704306 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.704326 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.704394 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.704416 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:55Z","lastTransitionTime":"2025-11-25T20:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.809388 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.809466 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.809485 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.809513 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.809536 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:55Z","lastTransitionTime":"2025-11-25T20:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.912613 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.912676 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.912694 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.912723 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:55 crc kubenswrapper[4983]: I1125 20:27:55.912742 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:55Z","lastTransitionTime":"2025-11-25T20:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.017144 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.017224 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.017241 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.017301 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.017331 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.121721 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.121807 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.121831 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.121867 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.121892 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.225152 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.225263 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.225284 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.225318 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.225340 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.268873 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.268932 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.268944 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.268973 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.268988 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: E1125 20:27:56.283345 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:56Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.288819 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.288862 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.288876 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.288898 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.288914 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: E1125 20:27:56.304503 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:56Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.309149 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.309184 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.309196 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.309217 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.309230 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: E1125 20:27:56.321142 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:56Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.325463 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.325512 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.325531 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.325560 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.325603 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: E1125 20:27:56.340657 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:56Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.345419 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.345707 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.345746 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.345770 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.345786 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: E1125 20:27:56.360817 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:56Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:56 crc kubenswrapper[4983]: E1125 20:27:56.360979 4983 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.363133 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.363164 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.363178 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.363201 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.363215 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.469716 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.469774 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.469786 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.469807 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.469824 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.593502 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.593630 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.593658 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.593697 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.593721 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.604409 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:56 crc kubenswrapper[4983]: E1125 20:27:56.604717 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.696126 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.696161 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.696173 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.696190 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.696201 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.800034 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.800105 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.800122 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.800152 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.800172 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.903721 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.903796 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.903812 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.903833 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:56 crc kubenswrapper[4983]: I1125 20:27:56.903848 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:56Z","lastTransitionTime":"2025-11-25T20:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.007884 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.007924 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.007934 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.007951 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.007964 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:57Z","lastTransitionTime":"2025-11-25T20:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.112297 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.112384 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.112408 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.113049 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.113325 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:57Z","lastTransitionTime":"2025-11-25T20:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.217279 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.217312 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.217321 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.217335 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.217344 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:57Z","lastTransitionTime":"2025-11-25T20:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.319806 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.319843 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.319852 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.319867 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.319876 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:57Z","lastTransitionTime":"2025-11-25T20:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.422323 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.422637 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.422754 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.422849 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.423062 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:57Z","lastTransitionTime":"2025-11-25T20:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.526065 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.526364 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.526471 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.526587 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.526685 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:57Z","lastTransitionTime":"2025-11-25T20:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.604516 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:57 crc kubenswrapper[4983]: E1125 20:27:57.605083 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.604886 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:57 crc kubenswrapper[4983]: E1125 20:27:57.605267 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.604596 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:57 crc kubenswrapper[4983]: E1125 20:27:57.605432 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.629028 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.629200 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.629303 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.629406 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.629475 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:57Z","lastTransitionTime":"2025-11-25T20:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.731750 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.732008 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.732073 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.732167 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.732255 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:57Z","lastTransitionTime":"2025-11-25T20:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.834676 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.834733 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.834745 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.834760 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.834769 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:57Z","lastTransitionTime":"2025-11-25T20:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.937575 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.937853 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.938047 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.938307 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:57 crc kubenswrapper[4983]: I1125 20:27:57.938511 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:57Z","lastTransitionTime":"2025-11-25T20:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.040873 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.041120 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.041231 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.041330 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.041423 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:58Z","lastTransitionTime":"2025-11-25T20:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.144986 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.145022 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.145031 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.145050 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.145059 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:58Z","lastTransitionTime":"2025-11-25T20:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.247925 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.247981 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.247999 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.248037 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.248056 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:58Z","lastTransitionTime":"2025-11-25T20:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.351509 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.351566 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.351606 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.351638 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.351660 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:58Z","lastTransitionTime":"2025-11-25T20:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.454887 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.454936 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.454950 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.454976 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.454991 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:58Z","lastTransitionTime":"2025-11-25T20:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.557488 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.557525 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.557533 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.557546 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.557576 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:58Z","lastTransitionTime":"2025-11-25T20:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.604627 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:27:58 crc kubenswrapper[4983]: E1125 20:27:58.604783 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.660547 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.660612 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.660621 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.660637 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.660676 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:58Z","lastTransitionTime":"2025-11-25T20:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.764001 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.764062 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.764073 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.764103 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.764115 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:58Z","lastTransitionTime":"2025-11-25T20:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.868338 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.868479 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.868499 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.868526 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.868543 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:58Z","lastTransitionTime":"2025-11-25T20:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.972391 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.972442 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.972453 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.972474 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:58 crc kubenswrapper[4983]: I1125 20:27:58.972488 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:58Z","lastTransitionTime":"2025-11-25T20:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.081514 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.081599 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.081613 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.081632 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.081652 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:59Z","lastTransitionTime":"2025-11-25T20:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.184325 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.184390 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.184408 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.184437 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.184457 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:59Z","lastTransitionTime":"2025-11-25T20:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.287169 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.287212 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.287221 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.287239 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.287249 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:59Z","lastTransitionTime":"2025-11-25T20:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.390887 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.390959 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.390978 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.391005 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.391029 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:59Z","lastTransitionTime":"2025-11-25T20:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.494067 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.494149 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.494197 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.494227 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.494244 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:59Z","lastTransitionTime":"2025-11-25T20:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.597257 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.597301 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.597312 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.597326 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.597337 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:59Z","lastTransitionTime":"2025-11-25T20:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.604898 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:27:59 crc kubenswrapper[4983]: E1125 20:27:59.604996 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.605159 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:27:59 crc kubenswrapper[4983]: E1125 20:27:59.605214 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.605370 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:27:59 crc kubenswrapper[4983]: E1125 20:27:59.605426 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.625060 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.646401 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.662516 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc 
kubenswrapper[4983]: I1125 20:27:59.686389 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a1
0255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.700190 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.700247 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.700264 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.700293 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.700311 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:59Z","lastTransitionTime":"2025-11-25T20:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.712037 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.729839 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.748067 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.767005 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.781387 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.803099 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.807195 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.807259 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.807279 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.807303 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.807321 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:59Z","lastTransitionTime":"2025-11-25T20:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.816260 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.842450 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.861261 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5370aa6c-92e7-4447-aa75-b1447ec44715\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d228861842ad79f241e1bb31222ffcc9e1a9f698e036a73a87d6d7d97f51f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123617c4db80100b747b7aad700831dc64b324c68688b53a2103be194c9a9933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c938af7bc233b84d7911804cfc58c11b7bac9fea1cd554210f5ea336512fff54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.880483 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.911478 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.912106 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.912283 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.912443 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.912457 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:48Z\\\",\\\"message\\\":\\\"e openshift-config-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1125 20:27:48.578713 6621 services_controller.go:445] Built service openshift-config-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1125 20:27:48.578721 6621 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z]\\\\nI1125 20:27:48.578726 6621 services_controller.go:451] Built service openshift-config-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", P\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b3250
73a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.912637 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:27:59Z","lastTransitionTime":"2025-11-25T20:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.928268 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.942599 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:27:59 crc kubenswrapper[4983]: I1125 20:27:59.959136 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:59Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.016236 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.016292 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.016304 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.016323 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.016334 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:00Z","lastTransitionTime":"2025-11-25T20:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.120133 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.120518 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.120534 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.120560 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.120592 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:00Z","lastTransitionTime":"2025-11-25T20:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.223963 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.224038 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.224061 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.224092 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.224115 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:00Z","lastTransitionTime":"2025-11-25T20:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.326666 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.326716 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.326730 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.326746 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.326756 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:00Z","lastTransitionTime":"2025-11-25T20:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.429591 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.429627 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.429638 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.429654 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.429665 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:00Z","lastTransitionTime":"2025-11-25T20:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.532093 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.532179 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.532199 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.532249 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.532268 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:00Z","lastTransitionTime":"2025-11-25T20:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.605021 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:00 crc kubenswrapper[4983]: E1125 20:28:00.605177 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.635497 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.635602 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.635621 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.635646 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.635664 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:00Z","lastTransitionTime":"2025-11-25T20:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.738316 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.738357 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.738382 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.738398 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.738408 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:00Z","lastTransitionTime":"2025-11-25T20:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.842059 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.842151 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.842176 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.842216 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.842243 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:00Z","lastTransitionTime":"2025-11-25T20:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.946133 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.946220 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.946238 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.946264 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:00 crc kubenswrapper[4983]: I1125 20:28:00.946288 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:00Z","lastTransitionTime":"2025-11-25T20:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.049254 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.049341 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.049366 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.049406 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.049433 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:01Z","lastTransitionTime":"2025-11-25T20:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.153197 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.153269 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.153287 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.153317 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.153342 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:01Z","lastTransitionTime":"2025-11-25T20:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.256990 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.257073 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.257093 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.257120 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.257139 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:01Z","lastTransitionTime":"2025-11-25T20:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.360563 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.361492 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.361623 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.361666 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.361761 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:01Z","lastTransitionTime":"2025-11-25T20:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.468533 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.468673 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.468693 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.468726 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.468747 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:01Z","lastTransitionTime":"2025-11-25T20:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.572426 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.572504 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.572527 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.572593 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.572613 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:01Z","lastTransitionTime":"2025-11-25T20:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.636621 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.636753 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:01 crc kubenswrapper[4983]: E1125 20:28:01.636879 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:01 crc kubenswrapper[4983]: E1125 20:28:01.637015 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.637097 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:01 crc kubenswrapper[4983]: E1125 20:28:01.637165 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.676811 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.676876 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.676893 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.676916 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.676931 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:01Z","lastTransitionTime":"2025-11-25T20:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.782537 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.782867 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.782888 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.782939 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.782966 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:01Z","lastTransitionTime":"2025-11-25T20:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.885517 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.885606 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.885812 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.885840 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.885862 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:01Z","lastTransitionTime":"2025-11-25T20:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.989403 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.989454 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.989471 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.989496 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:01 crc kubenswrapper[4983]: I1125 20:28:01.989513 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:01Z","lastTransitionTime":"2025-11-25T20:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.092935 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.092986 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.093003 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.093028 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.093045 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:02Z","lastTransitionTime":"2025-11-25T20:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.196058 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.196112 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.196129 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.196156 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.196174 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:02Z","lastTransitionTime":"2025-11-25T20:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.299876 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.299925 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.299941 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.299964 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.299982 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:02Z","lastTransitionTime":"2025-11-25T20:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.403953 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.404036 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.404058 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.404086 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.404106 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:02Z","lastTransitionTime":"2025-11-25T20:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.509166 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.509220 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.509232 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.509261 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.509280 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:02Z","lastTransitionTime":"2025-11-25T20:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.604910 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:02 crc kubenswrapper[4983]: E1125 20:28:02.605153 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.614052 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.614097 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.614112 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.614158 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.614172 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:02Z","lastTransitionTime":"2025-11-25T20:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.717995 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.718064 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.718088 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.718115 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.718134 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:02Z","lastTransitionTime":"2025-11-25T20:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.821692 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.821778 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.821796 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.821825 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.821848 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:02Z","lastTransitionTime":"2025-11-25T20:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.924280 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.924345 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.924363 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.924389 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:02 crc kubenswrapper[4983]: I1125 20:28:02.924406 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:02Z","lastTransitionTime":"2025-11-25T20:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.027987 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.028107 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.028134 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.028168 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.028195 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:03Z","lastTransitionTime":"2025-11-25T20:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.132750 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.132832 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.132849 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.132884 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.132911 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:03Z","lastTransitionTime":"2025-11-25T20:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.236587 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.236671 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.236691 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.236724 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.236746 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:03Z","lastTransitionTime":"2025-11-25T20:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.339846 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.339927 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.339950 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.339981 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.340002 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:03Z","lastTransitionTime":"2025-11-25T20:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.443065 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.443144 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.443178 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.443199 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.443209 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:03Z","lastTransitionTime":"2025-11-25T20:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.546247 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.546320 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.546338 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.546369 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.546391 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:03Z","lastTransitionTime":"2025-11-25T20:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.604937 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.605096 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:03 crc kubenswrapper[4983]: E1125 20:28:03.605157 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:03 crc kubenswrapper[4983]: E1125 20:28:03.605366 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.605215 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:03 crc kubenswrapper[4983]: E1125 20:28:03.605608 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.648669 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.648725 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.648735 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.648747 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.648758 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:03Z","lastTransitionTime":"2025-11-25T20:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.750645 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.750694 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.750713 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.750731 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.750744 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:03Z","lastTransitionTime":"2025-11-25T20:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.853136 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.853190 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.853203 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.853358 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.853384 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:03Z","lastTransitionTime":"2025-11-25T20:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.955878 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.955926 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.956132 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.956153 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:03 crc kubenswrapper[4983]: I1125 20:28:03.956166 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:03Z","lastTransitionTime":"2025-11-25T20:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.058270 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.058369 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.058379 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.058423 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.058434 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:04Z","lastTransitionTime":"2025-11-25T20:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.161058 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.161113 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.161126 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.161147 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.161160 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:04Z","lastTransitionTime":"2025-11-25T20:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.263585 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.263624 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.263634 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.263649 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.263661 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:04Z","lastTransitionTime":"2025-11-25T20:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.367253 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.367291 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.367302 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.367318 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.367329 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:04Z","lastTransitionTime":"2025-11-25T20:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.470516 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.470581 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.470595 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.470613 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.470624 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:04Z","lastTransitionTime":"2025-11-25T20:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.574018 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.574059 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.574073 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.574090 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.574101 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:04Z","lastTransitionTime":"2025-11-25T20:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.604760 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:04 crc kubenswrapper[4983]: E1125 20:28:04.604903 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.605587 4983 scope.go:117] "RemoveContainer" containerID="e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec" Nov 25 20:28:04 crc kubenswrapper[4983]: E1125 20:28:04.605748 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.676459 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.676497 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.676509 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.676528 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.676543 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:04Z","lastTransitionTime":"2025-11-25T20:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.779596 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.779698 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.779710 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.779726 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.779737 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:04Z","lastTransitionTime":"2025-11-25T20:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.881633 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.881665 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.881673 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.881685 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.881695 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:04Z","lastTransitionTime":"2025-11-25T20:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.983797 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.983824 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.983832 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.983845 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:04 crc kubenswrapper[4983]: I1125 20:28:04.983855 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:04Z","lastTransitionTime":"2025-11-25T20:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.086334 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.086365 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.086375 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.086388 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.086397 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:05Z","lastTransitionTime":"2025-11-25T20:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.189330 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.189374 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.189386 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.189404 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.189417 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:05Z","lastTransitionTime":"2025-11-25T20:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.291414 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.291444 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.291453 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.291467 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.291477 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:05Z","lastTransitionTime":"2025-11-25T20:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.396364 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.396412 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.396429 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.396452 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.396469 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:05Z","lastTransitionTime":"2025-11-25T20:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.498923 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.499006 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.499032 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.499072 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.499101 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:05Z","lastTransitionTime":"2025-11-25T20:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.602756 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.602836 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.602856 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.602890 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.602913 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:05Z","lastTransitionTime":"2025-11-25T20:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.604282 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:05 crc kubenswrapper[4983]: E1125 20:28:05.604463 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.605046 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.605089 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:05 crc kubenswrapper[4983]: E1125 20:28:05.605209 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:05 crc kubenswrapper[4983]: E1125 20:28:05.605291 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.706614 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.706657 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.706687 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.706712 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.706726 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:05Z","lastTransitionTime":"2025-11-25T20:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.808910 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.808967 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.808984 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.809000 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.809008 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:05Z","lastTransitionTime":"2025-11-25T20:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.912206 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.912269 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.912287 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.912314 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:05 crc kubenswrapper[4983]: I1125 20:28:05.912331 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:05Z","lastTransitionTime":"2025-11-25T20:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.015517 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.015581 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.015592 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.015608 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.015619 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.118472 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.118544 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.118573 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.118590 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.118600 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.222039 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.222081 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.222090 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.222109 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.222122 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.325001 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.325031 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.325040 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.325053 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.325062 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.427996 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.428072 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.428091 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.428116 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.428135 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.465946 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.466020 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.466034 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.466064 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.466080 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: E1125 20:28:06.482250 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:06Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.487109 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.487163 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.487177 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.487200 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.487215 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.498667 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:06 crc kubenswrapper[4983]: E1125 20:28:06.499007 4983 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:28:06 crc kubenswrapper[4983]: E1125 20:28:06.499164 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs podName:badc9ffd-b860-4ebb-a59f-044def6963d4 nodeName:}" failed. No retries permitted until 2025-11-25 20:28:38.499127369 +0000 UTC m=+99.611660801 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs") pod "network-metrics-daemon-59l9r" (UID: "badc9ffd-b860-4ebb-a59f-044def6963d4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:28:06 crc kubenswrapper[4983]: E1125 20:28:06.502105 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-a
e514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:06Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.506290 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.506444 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.506460 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.506483 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.506500 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: E1125 20:28:06.522488 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:06Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.526903 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.526985 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.527002 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.527026 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.527041 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: E1125 20:28:06.543116 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:06Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.546190 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.546237 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.546249 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.546271 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.546286 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: E1125 20:28:06.559767 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:06Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:06 crc kubenswrapper[4983]: E1125 20:28:06.559913 4983 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.561211 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.561238 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.561246 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.561262 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.561272 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.604496 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:06 crc kubenswrapper[4983]: E1125 20:28:06.604716 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.664458 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.664509 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.664521 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.664540 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.664573 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.767550 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.767656 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.767678 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.767708 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.767731 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.870642 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.870690 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.870699 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.870718 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.870729 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.973128 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.973202 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.973220 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.973244 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:06 crc kubenswrapper[4983]: I1125 20:28:06.973264 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:06Z","lastTransitionTime":"2025-11-25T20:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.077185 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.077254 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.077268 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.077295 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.077312 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:07Z","lastTransitionTime":"2025-11-25T20:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.180342 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.180411 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.180429 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.180455 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.180474 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:07Z","lastTransitionTime":"2025-11-25T20:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.282393 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.282424 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.282433 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.282450 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.282461 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:07Z","lastTransitionTime":"2025-11-25T20:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.385056 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.385115 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.385129 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.385152 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.385166 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:07Z","lastTransitionTime":"2025-11-25T20:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.487789 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.487834 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.487847 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.487863 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.487876 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:07Z","lastTransitionTime":"2025-11-25T20:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.590668 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.590715 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.590729 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.590748 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.590761 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:07Z","lastTransitionTime":"2025-11-25T20:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.604450 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.604497 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.604469 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:07 crc kubenswrapper[4983]: E1125 20:28:07.604611 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:07 crc kubenswrapper[4983]: E1125 20:28:07.604747 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:07 crc kubenswrapper[4983]: E1125 20:28:07.604823 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.693549 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.693614 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.693627 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.693645 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.693656 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:07Z","lastTransitionTime":"2025-11-25T20:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.795586 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.795649 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.795671 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.795695 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.795713 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:07Z","lastTransitionTime":"2025-11-25T20:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.899035 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.899124 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.899137 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.899153 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:07 crc kubenswrapper[4983]: I1125 20:28:07.899162 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:07Z","lastTransitionTime":"2025-11-25T20:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.002793 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.002865 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.002876 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.002896 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.002906 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:08Z","lastTransitionTime":"2025-11-25T20:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.107849 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.107914 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.107924 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.107943 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.107955 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:08Z","lastTransitionTime":"2025-11-25T20:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.211557 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.211636 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.211656 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.211684 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.211704 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:08Z","lastTransitionTime":"2025-11-25T20:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.315271 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.315330 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.315340 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.315357 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.315368 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:08Z","lastTransitionTime":"2025-11-25T20:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.375152 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6fkbz_40e594b9-8aa2-400d-b72e-c36e4523ced3/kube-multus/0.log" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.375206 4983 generic.go:334] "Generic (PLEG): container finished" podID="40e594b9-8aa2-400d-b72e-c36e4523ced3" containerID="a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe" exitCode=1 Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.375239 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6fkbz" event={"ID":"40e594b9-8aa2-400d-b72e-c36e4523ced3","Type":"ContainerDied","Data":"a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe"} Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.375645 4983 scope.go:117] "RemoveContainer" containerID="a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.403971 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.419072 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.419118 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.419128 4983 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.419152 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.419165 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:08Z","lastTransitionTime":"2025-11-25T20:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.424891 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:28:07Z\\\",\\\"message\\\":\\\"2025-11-25T20:27:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be\\\\n2025-11-25T20:27:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be to /host/opt/cni/bin/\\\\n2025-11-25T20:27:22Z [verbose] multus-daemon started\\\\n2025-11-25T20:27:22Z [verbose] Readiness Indicator file check\\\\n2025-11-25T20:28:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.438545 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.456037 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2
d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.467880 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.486402 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.499042 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5370aa6c-92e7-4447-aa75-b1447ec44715\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d228861842ad79f241e1bb31222ffcc9e1a9f698e036a73a87d6d7d97f51f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123617c4db80100b747b7aad700831dc64b324c68688b53a2103be194c9a9933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c938af7bc233b84d7911804cfc58c11b7bac9fea1cd554210f5ea336512fff54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.512602 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.522137 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.522176 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.522189 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.522205 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.522217 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:08Z","lastTransitionTime":"2025-11-25T20:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.533785 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:48Z\\\",\\\"message\\\":\\\"e openshift-config-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1125 20:27:48.578713 6621 services_controller.go:445] Built service openshift-config-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1125 20:27:48.578721 6621 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z]\\\\nI1125 20:27:48.578726 6621 services_controller.go:451] Built service openshift-config-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", P\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b3250
73a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.546578 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54
838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.561144 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.587008 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.604346 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.604371 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: E1125 20:28:08.604519 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.615816 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1
ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.624900 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.624940 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.624949 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:08 crc 
kubenswrapper[4983]: I1125 20:28:08.624964 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.624974 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:08Z","lastTransitionTime":"2025-11-25T20:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.626757 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc 
kubenswrapper[4983]: I1125 20:28:08.638275 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a1
0255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.650417 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.662548 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:08Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.727746 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.727796 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.727806 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.727822 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.727833 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:08Z","lastTransitionTime":"2025-11-25T20:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.829262 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.829295 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.829305 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.829319 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.829329 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:08Z","lastTransitionTime":"2025-11-25T20:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.933306 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.933353 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.933363 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.933379 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:08 crc kubenswrapper[4983]: I1125 20:28:08.933406 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:08Z","lastTransitionTime":"2025-11-25T20:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.035764 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.035821 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.035839 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.035858 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.035870 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:09Z","lastTransitionTime":"2025-11-25T20:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.137630 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.137863 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.137938 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.138009 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.138082 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:09Z","lastTransitionTime":"2025-11-25T20:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.240846 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.240956 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.240976 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.241013 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.241038 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:09Z","lastTransitionTime":"2025-11-25T20:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.343805 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.344857 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.345067 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.345098 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.345119 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:09Z","lastTransitionTime":"2025-11-25T20:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.380168 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6fkbz_40e594b9-8aa2-400d-b72e-c36e4523ced3/kube-multus/0.log" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.380416 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6fkbz" event={"ID":"40e594b9-8aa2-400d-b72e-c36e4523ced3","Type":"ContainerStarted","Data":"eb0e5d91873a8170028223fff5efc95aed446bf7add2da7f25fbb9be747f0118"} Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.398589 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.414000 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.443874 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:48Z\\\",\\\"message\\\":\\\"e openshift-config-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1125 20:27:48.578713 6621 services_controller.go:445] Built service openshift-config-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1125 20:27:48.578721 6621 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z]\\\\nI1125 20:27:48.578726 6621 services_controller.go:451] Built service openshift-config-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", P\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b3250
73a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.447772 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.447834 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.447851 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.447875 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.447892 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:09Z","lastTransitionTime":"2025-11-25T20:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.462858 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.480100 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.497825 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.514859 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.529176 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.542360 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.550645 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 
20:28:09.550676 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.550690 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.550707 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.550719 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:09Z","lastTransitionTime":"2025-11-25T20:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.552922 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.567920 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.581225 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0e5d91873a8170028223fff5efc95aed446bf7add2da7f25fbb9be747f0118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:28:07Z\\\",\\\"message\\\":\\\"2025-11-25T20:27:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be\\\\n2025-11-25T20:27:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be to /host/opt/cni/bin/\\\\n2025-11-25T20:27:22Z [verbose] multus-daemon started\\\\n2025-11-25T20:27:22Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T20:28:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.604737 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.604796 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:09 crc kubenswrapper[4983]: E1125 20:28:09.604872 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:09 crc kubenswrapper[4983]: E1125 20:28:09.604966 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.605140 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:09 crc kubenswrapper[4983]: E1125 20:28:09.605342 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.612515 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.627569 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5370aa6c-92e7-4447-aa75-b1447ec44715\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d228861842ad79f241e1bb31222ffcc9e1a9f698e036a73a87d6d7d97f51f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123617c4db80100b747b7aad700831dc64b324c68688b53a2103be194c9a9933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c938af7bc233b84d7911804cfc58c11b7bac9fea1cd554210f5ea336512fff54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.645492 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.654079 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.654128 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.654144 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.654168 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.654189 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:09Z","lastTransitionTime":"2025-11-25T20:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.660000 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.674625 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76
c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203
bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-
cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.686901 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.715645 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20
:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.726661 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5370aa6c-92e7-4447-aa75-b1447ec44715\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d228861842ad79f241e1bb31222ffcc9e1a9f698e036a73a87d6d7d97f51f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123617c4db80100b747b7aad700831dc64b324c68688b53a2103be194c9a9933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c938af7bc233b84d7911804cfc58c11b7bac9fea1cd554210f5ea336512fff54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.738851 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.748454 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.757438 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.757575 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.757651 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.757722 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.757793 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:09Z","lastTransitionTime":"2025-11-25T20:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.761260 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.773899 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.786583 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.800049 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.817487 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:48Z\\\",\\\"message\\\":\\\"e openshift-config-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1125 20:27:48.578713 6621 services_controller.go:445] Built service openshift-config-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1125 20:27:48.578721 6621 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z]\\\\nI1125 20:27:48.578726 6621 services_controller.go:451] Built service openshift-config-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", P\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b3250
73a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.830092 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54
838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.860158 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.860197 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.860206 4983 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.860220 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.860229 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:09Z","lastTransitionTime":"2025-11-25T20:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.884469 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.907674 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.924862 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.938414 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.947943 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.960849 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc 
kubenswrapper[4983]: I1125 20:28:09.962055 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.962077 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.962086 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.962101 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.962112 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:09Z","lastTransitionTime":"2025-11-25T20:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.973940 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:09 crc kubenswrapper[4983]: I1125 20:28:09.986753 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0e5d91873a8170028223fff5efc95aed446bf7add2da7f25fbb9be747f0118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:28:07Z\\\",\\\"message\\\":\\\"2025-11-25T20:27:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be\\\\n2025-11-25T20:27:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be to /host/opt/cni/bin/\\\\n2025-11-25T20:27:22Z [verbose] multus-daemon started\\\\n2025-11-25T20:27:22Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T20:28:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:09Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.065429 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.065463 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.065471 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.065485 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.065494 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:10Z","lastTransitionTime":"2025-11-25T20:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.167881 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.167920 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.167932 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.167950 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.167960 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:10Z","lastTransitionTime":"2025-11-25T20:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.269976 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.270501 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.270698 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.270847 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.271012 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:10Z","lastTransitionTime":"2025-11-25T20:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.374006 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.374121 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.374132 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.374148 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.374157 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:10Z","lastTransitionTime":"2025-11-25T20:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.477046 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.477092 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.477105 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.477121 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.477131 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:10Z","lastTransitionTime":"2025-11-25T20:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.579871 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.579915 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.579927 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.579947 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.579960 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:10Z","lastTransitionTime":"2025-11-25T20:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.604459 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:10 crc kubenswrapper[4983]: E1125 20:28:10.604617 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.682695 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.682733 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.682742 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.682755 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.682764 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:10Z","lastTransitionTime":"2025-11-25T20:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.784824 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.784861 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.784871 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.784903 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.784932 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:10Z","lastTransitionTime":"2025-11-25T20:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.887001 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.887057 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.887066 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.887086 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.887096 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:10Z","lastTransitionTime":"2025-11-25T20:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.990117 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.990179 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.990194 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.990223 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:10 crc kubenswrapper[4983]: I1125 20:28:10.990239 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:10Z","lastTransitionTime":"2025-11-25T20:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.093155 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.093198 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.093226 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.093263 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.093281 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:11Z","lastTransitionTime":"2025-11-25T20:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.196047 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.196111 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.196130 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.196159 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.196179 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:11Z","lastTransitionTime":"2025-11-25T20:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.298956 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.299034 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.299044 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.299058 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.299069 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:11Z","lastTransitionTime":"2025-11-25T20:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.402368 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.402408 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.402418 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.402433 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.402442 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:11Z","lastTransitionTime":"2025-11-25T20:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.506022 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.506063 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.506072 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.506089 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.506100 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:11Z","lastTransitionTime":"2025-11-25T20:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.604067 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.604114 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.604096 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:11 crc kubenswrapper[4983]: E1125 20:28:11.604322 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:11 crc kubenswrapper[4983]: E1125 20:28:11.604472 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:11 crc kubenswrapper[4983]: E1125 20:28:11.604628 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.608598 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.608629 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.608639 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.608664 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.608679 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:11Z","lastTransitionTime":"2025-11-25T20:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.711617 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.711657 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.711668 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.711684 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.711694 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:11Z","lastTransitionTime":"2025-11-25T20:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.814725 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.814770 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.814778 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.814798 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.814807 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:11Z","lastTransitionTime":"2025-11-25T20:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.917355 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.917459 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.917477 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.917500 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:11 crc kubenswrapper[4983]: I1125 20:28:11.917515 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:11Z","lastTransitionTime":"2025-11-25T20:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.021378 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.021422 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.021436 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.021453 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.021469 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:12Z","lastTransitionTime":"2025-11-25T20:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.125136 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.125210 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.125228 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.125262 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.125289 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:12Z","lastTransitionTime":"2025-11-25T20:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.229591 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.229645 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.229659 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.229681 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.229696 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:12Z","lastTransitionTime":"2025-11-25T20:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.332039 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.332086 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.332099 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.332118 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.332131 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:12Z","lastTransitionTime":"2025-11-25T20:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.434908 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.434942 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.434957 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.434972 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.434986 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:12Z","lastTransitionTime":"2025-11-25T20:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.539593 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.539670 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.539691 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.539718 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.539746 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:12Z","lastTransitionTime":"2025-11-25T20:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.604019 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:12 crc kubenswrapper[4983]: E1125 20:28:12.604277 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.643002 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.643093 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.643122 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.643159 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.643182 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:12Z","lastTransitionTime":"2025-11-25T20:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.746215 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.746265 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.746281 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.746304 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.746322 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:12Z","lastTransitionTime":"2025-11-25T20:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.849302 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.849385 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.849404 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.849431 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.849448 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:12Z","lastTransitionTime":"2025-11-25T20:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.951794 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.951842 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.951854 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.951871 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:12 crc kubenswrapper[4983]: I1125 20:28:12.951883 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:12Z","lastTransitionTime":"2025-11-25T20:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.054532 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.054598 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.054609 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.054627 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.054638 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:13Z","lastTransitionTime":"2025-11-25T20:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.157935 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.157982 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.157993 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.158013 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.158025 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:13Z","lastTransitionTime":"2025-11-25T20:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.262138 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.262203 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.262226 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.262253 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.262273 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:13Z","lastTransitionTime":"2025-11-25T20:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.365241 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.365330 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.365351 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.365380 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.365399 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:13Z","lastTransitionTime":"2025-11-25T20:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.469121 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.469179 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.469199 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.469229 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.469252 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:13Z","lastTransitionTime":"2025-11-25T20:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.573494 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.573605 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.573685 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.573719 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.573743 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:13Z","lastTransitionTime":"2025-11-25T20:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.608787 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:13 crc kubenswrapper[4983]: E1125 20:28:13.608985 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.609295 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:13 crc kubenswrapper[4983]: E1125 20:28:13.609391 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.609640 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:13 crc kubenswrapper[4983]: E1125 20:28:13.609742 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.679662 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.679738 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.679757 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.679786 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.679805 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:13Z","lastTransitionTime":"2025-11-25T20:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.783238 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.783313 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.783340 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.783373 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.783395 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:13Z","lastTransitionTime":"2025-11-25T20:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.887628 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.887685 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.887703 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.887732 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.887751 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:13Z","lastTransitionTime":"2025-11-25T20:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.991226 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.991283 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.991302 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.991330 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:13 crc kubenswrapper[4983]: I1125 20:28:13.991353 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:13Z","lastTransitionTime":"2025-11-25T20:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.095429 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.095479 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.095493 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.095516 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.095530 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:14Z","lastTransitionTime":"2025-11-25T20:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.199599 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.199677 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.199699 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.199730 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.199750 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:14Z","lastTransitionTime":"2025-11-25T20:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.302858 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.302924 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.302945 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.302980 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.303003 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:14Z","lastTransitionTime":"2025-11-25T20:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.405741 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.405828 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.405848 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.405878 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.405902 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:14Z","lastTransitionTime":"2025-11-25T20:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.508628 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.508687 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.508699 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.508712 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.508721 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:14Z","lastTransitionTime":"2025-11-25T20:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.604854 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:14 crc kubenswrapper[4983]: E1125 20:28:14.604990 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.610463 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.610496 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.610504 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.610534 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.610545 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:14Z","lastTransitionTime":"2025-11-25T20:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.712486 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.712584 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.712594 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.712629 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.712640 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:14Z","lastTransitionTime":"2025-11-25T20:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.814992 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.815082 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.815101 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.815134 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.815160 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:14Z","lastTransitionTime":"2025-11-25T20:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.920233 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.920319 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.920337 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.920373 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:14 crc kubenswrapper[4983]: I1125 20:28:14.920392 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:14Z","lastTransitionTime":"2025-11-25T20:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.025630 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.025688 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.025706 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.025729 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.025748 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:15Z","lastTransitionTime":"2025-11-25T20:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.128050 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.128108 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.128126 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.128151 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.128168 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:15Z","lastTransitionTime":"2025-11-25T20:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.230513 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.230574 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.230584 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.230600 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.230610 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:15Z","lastTransitionTime":"2025-11-25T20:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.333340 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.333399 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.333415 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.333441 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.333455 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:15Z","lastTransitionTime":"2025-11-25T20:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.436259 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.436327 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.436347 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.436379 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.436400 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:15Z","lastTransitionTime":"2025-11-25T20:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.539817 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.539884 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.539906 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.539937 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.539960 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:15Z","lastTransitionTime":"2025-11-25T20:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.606691 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:15 crc kubenswrapper[4983]: E1125 20:28:15.606855 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.607153 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:15 crc kubenswrapper[4983]: E1125 20:28:15.607248 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.608639 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:15 crc kubenswrapper[4983]: E1125 20:28:15.608745 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.642968 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.643013 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.643026 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.643047 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.643064 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:15Z","lastTransitionTime":"2025-11-25T20:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.746376 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.746460 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.746480 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.746514 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.746536 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:15Z","lastTransitionTime":"2025-11-25T20:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.850524 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.850608 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.850618 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.850637 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.850647 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:15Z","lastTransitionTime":"2025-11-25T20:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.953332 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.953372 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.953383 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.953399 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:15 crc kubenswrapper[4983]: I1125 20:28:15.953409 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:15Z","lastTransitionTime":"2025-11-25T20:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.056413 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.056470 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.056488 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.056516 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.056536 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.159719 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.159782 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.159797 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.159817 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.159835 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.262871 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.262917 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.262926 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.262939 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.262948 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.364817 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.364890 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.364910 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.364936 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.364954 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.468059 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.468098 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.468106 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.468122 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.468131 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.570483 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.570527 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.570541 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.570578 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.570590 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.604414 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:16 crc kubenswrapper[4983]: E1125 20:28:16.604578 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.673126 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.673180 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.673208 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.673225 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.673236 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.783880 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.783957 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.783974 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.784002 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.784021 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.886942 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.887005 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.887028 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.887057 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.887078 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.900428 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.900474 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.900495 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.900520 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.900542 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: E1125 20:28:16.924790 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:16Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.930360 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.930433 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.930458 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.930488 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.930508 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: E1125 20:28:16.951946 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:16Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.957878 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.957948 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.957981 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.958011 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.958035 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:16 crc kubenswrapper[4983]: E1125 20:28:16.981416 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:16Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.986041 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.986085 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.986102 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.986127 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:16 crc kubenswrapper[4983]: I1125 20:28:16.986145 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:16Z","lastTransitionTime":"2025-11-25T20:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:17 crc kubenswrapper[4983]: E1125 20:28:17.006476 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:17Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.017256 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.017306 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.017316 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.017333 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.017343 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:17Z","lastTransitionTime":"2025-11-25T20:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:17 crc kubenswrapper[4983]: E1125 20:28:17.036790 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:17Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:17 crc kubenswrapper[4983]: E1125 20:28:17.037018 4983 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.039098 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.039166 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.039192 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.039226 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.039250 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:17Z","lastTransitionTime":"2025-11-25T20:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.141740 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.141841 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.141865 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.141893 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.141919 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:17Z","lastTransitionTime":"2025-11-25T20:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.244293 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.244366 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.244443 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.244484 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.244506 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:17Z","lastTransitionTime":"2025-11-25T20:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.347905 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.347949 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.347959 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.347974 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.347983 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:17Z","lastTransitionTime":"2025-11-25T20:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.450373 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.450406 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.450413 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.450444 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.450458 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:17Z","lastTransitionTime":"2025-11-25T20:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.553923 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.554033 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.554060 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.554095 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.554115 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:17Z","lastTransitionTime":"2025-11-25T20:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.604755 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.604785 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.604844 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:17 crc kubenswrapper[4983]: E1125 20:28:17.605048 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:17 crc kubenswrapper[4983]: E1125 20:28:17.605183 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:17 crc kubenswrapper[4983]: E1125 20:28:17.605281 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.657338 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.657398 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.657416 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.657442 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.657459 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:17Z","lastTransitionTime":"2025-11-25T20:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.760436 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.760601 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.760629 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.760659 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.760683 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:17Z","lastTransitionTime":"2025-11-25T20:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.863621 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.863693 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.863712 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.863740 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.863760 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:17Z","lastTransitionTime":"2025-11-25T20:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.966644 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.966725 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.966735 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.966752 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:17 crc kubenswrapper[4983]: I1125 20:28:17.966761 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:17Z","lastTransitionTime":"2025-11-25T20:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.070105 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.070167 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.070189 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.070218 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.070239 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:18Z","lastTransitionTime":"2025-11-25T20:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.173649 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.173701 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.173709 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.173723 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.173733 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:18Z","lastTransitionTime":"2025-11-25T20:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.276939 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.277020 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.277033 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.277078 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.277091 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:18Z","lastTransitionTime":"2025-11-25T20:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.380222 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.380279 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.380289 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.380302 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.380312 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:18Z","lastTransitionTime":"2025-11-25T20:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.483590 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.483643 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.483658 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.483686 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.483703 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:18Z","lastTransitionTime":"2025-11-25T20:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.586276 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.586325 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.586336 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.586352 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.586362 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:18Z","lastTransitionTime":"2025-11-25T20:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.603917 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:18 crc kubenswrapper[4983]: E1125 20:28:18.604095 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.689141 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.689200 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.689222 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.689251 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.689273 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:18Z","lastTransitionTime":"2025-11-25T20:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.792031 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.792101 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.792122 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.792150 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.792170 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:18Z","lastTransitionTime":"2025-11-25T20:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.894441 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.894539 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.894598 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.894623 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.894640 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:18Z","lastTransitionTime":"2025-11-25T20:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.997681 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.997737 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.997757 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.997784 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:18 crc kubenswrapper[4983]: I1125 20:28:18.997805 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:18Z","lastTransitionTime":"2025-11-25T20:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.100477 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.100856 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.100866 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.100883 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.100893 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:19Z","lastTransitionTime":"2025-11-25T20:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.204171 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.204238 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.204261 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.204288 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.204306 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:19Z","lastTransitionTime":"2025-11-25T20:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.307202 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.307253 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.307268 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.307289 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.307303 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:19Z","lastTransitionTime":"2025-11-25T20:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.410283 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.410329 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.410343 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.410359 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.410373 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:19Z","lastTransitionTime":"2025-11-25T20:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.518362 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.518426 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.518438 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.518453 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.518467 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:19Z","lastTransitionTime":"2025-11-25T20:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.604722 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:19 crc kubenswrapper[4983]: E1125 20:28:19.604894 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.605044 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.605623 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:19 crc kubenswrapper[4983]: E1125 20:28:19.605748 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.606050 4983 scope.go:117] "RemoveContainer" containerID="e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec" Nov 25 20:28:19 crc kubenswrapper[4983]: E1125 20:28:19.606045 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.621151 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.621208 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.621225 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.621246 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.621263 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:19Z","lastTransitionTime":"2025-11-25T20:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.625919 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8
060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.646113 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.678886 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:48Z\\\",\\\"message\\\":\\\"e openshift-config-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1125 20:27:48.578713 6621 services_controller.go:445] Built service openshift-config-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1125 20:27:48.578721 6621 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z]\\\\nI1125 20:27:48.578726 6621 services_controller.go:451] Built service openshift-config-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", P\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b3250
73a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.697266 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54
838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.716747 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc 
kubenswrapper[4983]: I1125 20:28:19.723934 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.723988 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.724002 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.724023 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.724037 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:19Z","lastTransitionTime":"2025-11-25T20:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.738008 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.761958 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.776114 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.790528 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.804037 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.819067 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.825643 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.825672 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.825681 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.825694 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.825703 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:19Z","lastTransitionTime":"2025-11-25T20:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.836119 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0e5d91873a8170028223fff5efc95aed446bf7add2da7f25fbb9be747f0118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fb
ca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:28:07Z\\\",\\\"message\\\":\\\"2025-11-25T20:27:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be\\\\n2025-11-25T20:27:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be to /host/opt/cni/bin/\\\\n2025-11-25T20:27:22Z [verbose] multus-daemon started\\\\n2025-11-25T20:27:22Z [verbose] Readiness Indicator file check\\\\n2025-11-25T20:28:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.849921 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.869752 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20
:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.880729 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5370aa6c-92e7-4447-aa75-b1447ec44715\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d228861842ad79f241e1bb31222ffcc9e1a9f698e036a73a87d6d7d97f51f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123617c4db80100b747b7aad700831dc64b324c68688b53a2103be194c9a9933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c938af7bc233b84d7911804cfc58c11b7bac9fea1cd554210f5ea336512fff54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.891592 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.900965 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.913621 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:19Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.928177 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.928203 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.928211 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.928224 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:19 crc kubenswrapper[4983]: I1125 20:28:19.928236 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:19Z","lastTransitionTime":"2025-11-25T20:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.030106 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.030158 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.030171 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.030190 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.030203 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:20Z","lastTransitionTime":"2025-11-25T20:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.132118 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.132155 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.132163 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.132178 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.132189 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:20Z","lastTransitionTime":"2025-11-25T20:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.233925 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.233958 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.233968 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.233982 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.233993 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:20Z","lastTransitionTime":"2025-11-25T20:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.335544 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.335648 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.335660 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.335675 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.335684 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:20Z","lastTransitionTime":"2025-11-25T20:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.420988 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/2.log" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.423158 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277"} Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.423525 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.435584 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.438241 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.438295 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.438308 4983 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.438328 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.438340 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:20Z","lastTransitionTime":"2025-11-25T20:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.449818 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0e5d91873a8170028223fff5efc95aed446bf7add
2da7f25fbb9be747f0118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:28:07Z\\\",\\\"message\\\":\\\"2025-11-25T20:27:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be\\\\n2025-11-25T20:27:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be to /host/opt/cni/bin/\\\\n2025-11-25T20:27:22Z [verbose] multus-daemon started\\\\n2025-11-25T20:27:22Z [verbose] Readiness Indicator file check\\\\n2025-11-25T20:28:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.470526 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.1
1\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.485185 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5370aa6c-92e7-4447-aa75-b1447ec44715\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d228861842ad79f241e1bb31222ffcc9e1a9f698e036a73a87d6d7d97f51f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123617c4db80100b747b7aad700831dc64b324c68688b53a2103be194c9a9933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c938af7bc233b84d7911804cfc58c11b7bac9fea1cd554210f5ea336512fff54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.496782 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.505480 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.518667 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.527941 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.538228 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.540642 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.540679 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.540691 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.540707 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.540717 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:20Z","lastTransitionTime":"2025-11-25T20:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.548985 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.565928 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:48Z\\\",\\\"message\\\":\\\"e openshift-config-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1125 20:27:48.578713 6621 services_controller.go:445] Built service openshift-config-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1125 20:27:48.578721 6621 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z]\\\\nI1125 20:27:48.578726 6621 services_controller.go:451] Built service openshift-config-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
P\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.575930 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54
838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.588087 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\"
,\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.599528 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.604706 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:20 crc kubenswrapper[4983]: E1125 20:28:20.604866 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.610231 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.622677 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.633474 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.641339 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:20Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:20 crc 
kubenswrapper[4983]: I1125 20:28:20.642990 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.643028 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.643038 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.643053 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.643063 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:20Z","lastTransitionTime":"2025-11-25T20:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.744798 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.744843 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.744852 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.744868 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.744876 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:20Z","lastTransitionTime":"2025-11-25T20:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.847338 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.847376 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.847384 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.847401 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.847409 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:20Z","lastTransitionTime":"2025-11-25T20:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.949631 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.949682 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.949693 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.949711 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:20 crc kubenswrapper[4983]: I1125 20:28:20.949723 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:20Z","lastTransitionTime":"2025-11-25T20:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.052282 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.052322 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.052331 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.052347 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.052357 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:21Z","lastTransitionTime":"2025-11-25T20:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.155085 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.155142 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.155160 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.155184 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.155201 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:21Z","lastTransitionTime":"2025-11-25T20:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.258199 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.258695 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.258745 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.258772 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.258791 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:21Z","lastTransitionTime":"2025-11-25T20:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.361707 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.361767 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.361791 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.361824 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.361848 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:21Z","lastTransitionTime":"2025-11-25T20:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.429123 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/3.log" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.429858 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/2.log" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.433735 4983 generic.go:334] "Generic (PLEG): container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" exitCode=1 Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.433776 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277"} Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.433818 4983 scope.go:117] "RemoveContainer" containerID="e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.435943 4983 scope.go:117] "RemoveContainer" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" Nov 25 20:28:21 crc kubenswrapper[4983]: E1125 20:28:21.436368 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.463782 4983 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.463837 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.463856 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.463881 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.463899 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:21Z","lastTransitionTime":"2025-11-25T20:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.472248 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2bfce31122e18f0dd07a2c865dffa751ff2af176ef7e62af81473439761c8ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:27:48Z\\\",\\\"message\\\":\\\"e openshift-config-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI1125 20:27:48.578713 6621 services_controller.go:445] Built service openshift-config-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF1125 20:27:48.578721 6621 ovnkube.go:137] failed to run ovnkube: 
[failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:27:48Z is after 2025-08-24T17:21:41Z]\\\\nI1125 20:27:48.578726 6621 services_controller.go:451] Built service openshift-config-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-config-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", P\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:28:20Z\\\",\\\"message\\\":\\\"-config-operator/machine-config-daemon-fqvg7 after 0 failed attempt(s)\\\\nI1125 20:28:20.467654 7036 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 20:28:20.467657 7036 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-fqvg7\\\\nI1125 20:28:20.466463 7036 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1125 
20:28:20.467672 7036 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1125 20:28:20.467681 7036 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1125 20:28:20.466389 7036 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1125 20:28:20.467693 7036 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1125 20:28:20.467701 7036 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1125 20:28:20.467703 7036 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"
name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.485137 4983 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.501075 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.515833 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.528934 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.538583 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.548966 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc 
kubenswrapper[4983]: I1125 20:28:21.560792 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a1
0255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.566524 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.566732 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.566834 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.566946 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.567050 4983 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:21Z","lastTransitionTime":"2025-11-25T20:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.580581 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.590951 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.601967 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.604061 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.604136 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:21 crc kubenswrapper[4983]: E1125 20:28:21.604215 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.604121 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:21 crc kubenswrapper[4983]: E1125 20:28:21.604370 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:21 crc kubenswrapper[4983]: E1125 20:28:21.604421 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.618515 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0e5d91873a8170028223fff5efc95aed446bf7add2da7f25fbb9be747f0118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:28:07Z\\\",\\\"message\\\":\\\"2025-11-25T20:27:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be\\\\n2025-11-25T20:27:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be to /host/opt/cni/bin/\\\\n2025-11-25T20:27:22Z [verbose] multus-daemon started\\\\n2025-11-25T20:27:22Z [verbose] Readiness Indicator file check\\\\n2025-11-25T20:28:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.628108 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.640662 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.649239 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.669678 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.669898 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.670032 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.670223 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.670388 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:21Z","lastTransitionTime":"2025-11-25T20:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.675885 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.687530 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5370aa6c-92e7-4447-aa75-b1447ec44715\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d228861842ad79f241e1bb31222ffcc9e1a9f698e036a73a87d6d7d97f51f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123617c4db80100b747b7aad700831dc64b324c68688b53a2103be194c9a9933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c938af7bc233b84d7911804cfc58c11b7bac9fea1cd554210f5ea336512fff54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.700045 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:21Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.772375 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.772435 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.772458 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 
20:28:21.772487 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.772509 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:21Z","lastTransitionTime":"2025-11-25T20:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.875134 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.875236 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.875256 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.875283 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.875304 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:21Z","lastTransitionTime":"2025-11-25T20:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.978468 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.978549 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.978607 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.978638 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:21 crc kubenswrapper[4983]: I1125 20:28:21.978666 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:21Z","lastTransitionTime":"2025-11-25T20:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.081218 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.081294 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.081313 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.081339 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.081354 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:22Z","lastTransitionTime":"2025-11-25T20:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.183408 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.183450 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.183462 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.183501 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.183513 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:22Z","lastTransitionTime":"2025-11-25T20:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.286988 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.287052 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.287071 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.287169 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.287198 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:22Z","lastTransitionTime":"2025-11-25T20:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.389444 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.389468 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.389477 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.389491 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.389500 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:22Z","lastTransitionTime":"2025-11-25T20:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.439044 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/3.log" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.443478 4983 scope.go:117] "RemoveContainer" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" Nov 25 20:28:22 crc kubenswrapper[4983]: E1125 20:28:22.443688 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.463997 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.477273 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.488108 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.492284 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.492361 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.492379 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.492418 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.492436 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:22Z","lastTransitionTime":"2025-11-25T20:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.500035 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.515195 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc 
kubenswrapper[4983]: I1125 20:28:22.529069 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a1
0255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.543270 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0e5d91873a8170028223fff5efc95aed446bf7add2da7f25fbb9be747f0118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:28:07Z\\\",\\\"message\\\":\\\"2025-11-25T20:27:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be\\\\n2025-11-25T20:27:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be to /host/opt/cni/bin/\\\\n2025-11-25T20:27:22Z [verbose] multus-daemon started\\\\n2025-11-25T20:27:22Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T20:28:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.561477 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.576884 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5370aa6c-92e7-4447-aa75-b1447ec44715\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d228861842ad79f241e1bb31222ffcc9e1a9f698e036a73a87d6d7d97f51f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123617c4db80100b747b7aad700831dc64b324c68688b53a2103be194c9a9933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c938af7bc233b84d7911804cfc58c11b7bac9fea1cd554210f5ea336512fff54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.594413 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.595257 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.595291 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.595299 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 
20:28:22.595313 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.595322 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:22Z","lastTransitionTime":"2025-11-25T20:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.604462 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:22 crc kubenswrapper[4983]: E1125 20:28:22.604718 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.608316 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.631775 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4
c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f24
9e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.645227 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.676259 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20
:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.691681 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.698826 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.698898 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.698925 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.698959 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.699028 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:22Z","lastTransitionTime":"2025-11-25T20:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.710203 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e
8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.740626 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:28:20Z\\\",\\\"message\\\":\\\"-config-operator/machine-config-daemon-fqvg7 after 0 failed attempt(s)\\\\nI1125 20:28:20.467654 7036 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 20:28:20.467657 7036 default_network_controller.go:776] Recording success event on pod 
openshift-machine-config-operator/machine-config-daemon-fqvg7\\\\nI1125 20:28:20.466463 7036 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1125 20:28:20.467672 7036 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1125 20:28:20.467681 7036 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1125 20:28:20.466389 7036 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1125 20:28:20.467693 7036 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1125 20:28:20.467701 7036 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1125 20:28:20.467703 7036 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:28:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b3250
73a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.753916 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54
838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:22Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.802533 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.802625 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.802647 4983 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.803032 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.803280 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:22Z","lastTransitionTime":"2025-11-25T20:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.906231 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.906277 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.906295 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.906318 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:22 crc kubenswrapper[4983]: I1125 20:28:22.906336 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:22Z","lastTransitionTime":"2025-11-25T20:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.009510 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.010057 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.010085 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.010115 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.010135 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:23Z","lastTransitionTime":"2025-11-25T20:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.112872 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.112917 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.112929 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.112947 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.112965 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:23Z","lastTransitionTime":"2025-11-25T20:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.216148 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.216247 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.216281 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.216310 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.216329 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:23Z","lastTransitionTime":"2025-11-25T20:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.318801 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.318891 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.318907 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.318928 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.318945 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:23Z","lastTransitionTime":"2025-11-25T20:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.403135 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.403345 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 20:29:27.403315985 +0000 UTC m=+148.515849407 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.421735 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.421801 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.421824 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.421859 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.421882 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:23Z","lastTransitionTime":"2025-11-25T20:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.504410 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.504504 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.504545 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.504644 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.504655 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.504745 4983 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.504773 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.504801 4983 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.504830 4983 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.504854 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 20:29:27.504823335 +0000 UTC m=+148.617356777 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.504835 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.504924 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 20:29:27.504874366 +0000 UTC m=+148.617407808 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.504943 4983 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.504954 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-25 20:29:27.504938818 +0000 UTC m=+148.617472250 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.504966 4983 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.505048 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 20:29:27.50502317 +0000 UTC m=+148.617556602 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.524402 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.524445 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.524459 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.524479 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.524495 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:23Z","lastTransitionTime":"2025-11-25T20:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.604301 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.604300 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.604451 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.604660 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.604800 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:23 crc kubenswrapper[4983]: E1125 20:28:23.604895 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.627476 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.627520 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.627534 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.627550 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.627583 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:23Z","lastTransitionTime":"2025-11-25T20:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.730407 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.730447 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.730459 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.730475 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.730487 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:23Z","lastTransitionTime":"2025-11-25T20:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.833587 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.833630 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.833642 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.833658 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.833671 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:23Z","lastTransitionTime":"2025-11-25T20:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.937121 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.937189 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.937208 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.937232 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:23 crc kubenswrapper[4983]: I1125 20:28:23.937249 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:23Z","lastTransitionTime":"2025-11-25T20:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.039758 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.039810 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.039827 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.039851 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.039868 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:24Z","lastTransitionTime":"2025-11-25T20:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.142256 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.142291 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.142302 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.142318 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.142328 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:24Z","lastTransitionTime":"2025-11-25T20:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.245800 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.245862 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.245880 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.245904 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.245921 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:24Z","lastTransitionTime":"2025-11-25T20:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.348311 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.348339 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.348347 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.348360 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.348369 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:24Z","lastTransitionTime":"2025-11-25T20:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.450402 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.450470 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.450491 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.450514 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.450531 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:24Z","lastTransitionTime":"2025-11-25T20:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.552332 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.552366 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.552375 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.552388 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.552396 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:24Z","lastTransitionTime":"2025-11-25T20:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.604431 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:24 crc kubenswrapper[4983]: E1125 20:28:24.604637 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.654218 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.654264 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.654279 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.654306 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.654321 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:24Z","lastTransitionTime":"2025-11-25T20:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.756903 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.756963 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.756984 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.757005 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.757021 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:24Z","lastTransitionTime":"2025-11-25T20:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.858963 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.859009 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.859023 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.859044 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.859057 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:24Z","lastTransitionTime":"2025-11-25T20:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.962178 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.962224 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.962237 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.962254 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:24 crc kubenswrapper[4983]: I1125 20:28:24.962266 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:24Z","lastTransitionTime":"2025-11-25T20:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.064450 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.064482 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.064493 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.064508 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.064518 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:25Z","lastTransitionTime":"2025-11-25T20:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.167938 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.167991 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.168007 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.168026 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.168041 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:25Z","lastTransitionTime":"2025-11-25T20:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.270984 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.271051 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.271072 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.271104 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.271128 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:25Z","lastTransitionTime":"2025-11-25T20:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.373224 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.373265 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.373277 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.373295 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.373307 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:25Z","lastTransitionTime":"2025-11-25T20:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.475095 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.475135 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.475148 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.475163 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.475172 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:25Z","lastTransitionTime":"2025-11-25T20:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.576905 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.576964 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.576988 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.577011 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.577027 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:25Z","lastTransitionTime":"2025-11-25T20:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.604408 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:25 crc kubenswrapper[4983]: E1125 20:28:25.604535 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.604782 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:25 crc kubenswrapper[4983]: E1125 20:28:25.604871 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.605025 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:25 crc kubenswrapper[4983]: E1125 20:28:25.605314 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.679709 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.679756 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.679769 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.679785 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.679798 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:25Z","lastTransitionTime":"2025-11-25T20:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.782631 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.782689 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.782706 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.782732 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.782750 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:25Z","lastTransitionTime":"2025-11-25T20:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.885089 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.885126 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.885134 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.885147 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.885158 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:25Z","lastTransitionTime":"2025-11-25T20:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.987602 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.987651 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.987660 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.987675 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:25 crc kubenswrapper[4983]: I1125 20:28:25.987686 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:25Z","lastTransitionTime":"2025-11-25T20:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.090253 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.090288 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.090299 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.090314 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.090326 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:26Z","lastTransitionTime":"2025-11-25T20:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.192926 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.192960 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.192969 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.192981 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.192990 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:26Z","lastTransitionTime":"2025-11-25T20:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.294953 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.294994 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.295005 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.295020 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.295034 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:26Z","lastTransitionTime":"2025-11-25T20:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.397237 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.397306 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.397326 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.397351 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.397368 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:26Z","lastTransitionTime":"2025-11-25T20:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.499114 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.499145 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.499154 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.499166 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.499175 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:26Z","lastTransitionTime":"2025-11-25T20:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.601287 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.601343 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.601353 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.601368 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.601378 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:26Z","lastTransitionTime":"2025-11-25T20:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.604848 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:26 crc kubenswrapper[4983]: E1125 20:28:26.605020 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.704548 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.704616 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.704627 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.704643 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.704653 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:26Z","lastTransitionTime":"2025-11-25T20:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.806932 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.806975 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.806984 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.807001 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.807017 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:26Z","lastTransitionTime":"2025-11-25T20:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.909102 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.909154 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.909166 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.909186 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:26 crc kubenswrapper[4983]: I1125 20:28:26.909199 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:26Z","lastTransitionTime":"2025-11-25T20:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.011961 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.012001 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.012011 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.012025 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.012034 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.114188 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.114222 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.114230 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.114244 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.114253 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.210721 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.210751 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.210760 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.210774 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.210782 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: E1125 20:28:27.223824 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.226939 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.226972 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.226981 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.226995 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.227004 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: E1125 20:28:27.246732 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.250261 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.250300 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.250311 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.250331 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.250342 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: E1125 20:28:27.265409 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.268873 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.268905 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.268914 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.268928 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.268937 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: E1125 20:28:27.281329 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.284501 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.284524 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.284532 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.284545 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.284600 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: E1125 20:28:27.299035 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:27Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:27 crc kubenswrapper[4983]: E1125 20:28:27.299142 4983 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.301021 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.301060 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.301070 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.301084 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.301093 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.403670 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.403721 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.403731 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.403765 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.403780 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.506498 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.506631 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.506652 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.506679 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.506697 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.604542 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.604610 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:27 crc kubenswrapper[4983]: E1125 20:28:27.604757 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.604947 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:27 crc kubenswrapper[4983]: E1125 20:28:27.604989 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:27 crc kubenswrapper[4983]: E1125 20:28:27.605244 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.608870 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.609017 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.609048 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.609085 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.609114 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.712743 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.712825 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.712865 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.712903 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.712927 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.817873 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.817987 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.818016 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.818059 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.818087 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.920417 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.920458 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.920469 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.920487 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:27 crc kubenswrapper[4983]: I1125 20:28:27.920499 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:27Z","lastTransitionTime":"2025-11-25T20:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.022642 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.022685 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.022697 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.022797 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.022814 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:28Z","lastTransitionTime":"2025-11-25T20:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.127761 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.127823 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.127833 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.127846 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.127854 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:28Z","lastTransitionTime":"2025-11-25T20:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.230355 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.230413 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.230431 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.230451 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.230462 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:28Z","lastTransitionTime":"2025-11-25T20:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.332600 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.332652 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.332670 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.332694 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.332713 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:28Z","lastTransitionTime":"2025-11-25T20:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.435672 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.435716 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.435726 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.435742 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.435751 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:28Z","lastTransitionTime":"2025-11-25T20:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.538052 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.538089 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.538099 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.538113 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.538122 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:28Z","lastTransitionTime":"2025-11-25T20:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.603916 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:28 crc kubenswrapper[4983]: E1125 20:28:28.604230 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.641124 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.641186 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.641205 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.641228 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.641247 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:28Z","lastTransitionTime":"2025-11-25T20:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.743628 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.743689 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.743707 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.743732 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.743750 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:28Z","lastTransitionTime":"2025-11-25T20:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.846532 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.846620 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.846628 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.846646 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.846655 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:28Z","lastTransitionTime":"2025-11-25T20:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.949050 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.949115 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.949133 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.949158 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:28 crc kubenswrapper[4983]: I1125 20:28:28.949175 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:28Z","lastTransitionTime":"2025-11-25T20:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.051701 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.051766 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.051784 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.051811 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.051833 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:29Z","lastTransitionTime":"2025-11-25T20:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.155640 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.155705 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.155722 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.155753 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.155771 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:29Z","lastTransitionTime":"2025-11-25T20:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.259370 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.259438 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.259456 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.259483 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.259503 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:29Z","lastTransitionTime":"2025-11-25T20:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.362723 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.362758 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.362793 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.362809 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.362818 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:29Z","lastTransitionTime":"2025-11-25T20:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.466009 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.466079 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.466096 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.466122 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.466139 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:29Z","lastTransitionTime":"2025-11-25T20:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.568477 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.568533 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.568550 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.568607 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.568631 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:29Z","lastTransitionTime":"2025-11-25T20:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.604532 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:29 crc kubenswrapper[4983]: E1125 20:28:29.604647 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.604745 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.604812 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:29 crc kubenswrapper[4983]: E1125 20:28:29.604973 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:29 crc kubenswrapper[4983]: E1125 20:28:29.605063 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.619151 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-59l9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"badc9ffd-b860-4ebb-a59f-044def6963d4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj7qw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-59l9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc 
kubenswrapper[4983]: I1125 20:28:29.637330 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e409ec05-8a05-432f-ad38-8f7f3591bc3b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2110f83a69a1
0255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"n 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 20:27:13.233322 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1763250980/tls.crt::/tmp/serving-cert-1763250980/tls.key\\\\\\\"\\\\nI1125 20:27:19.048380 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 20:27:19.053918 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 20:27:19.053977 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 20:27:19.054030 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 20:27:19.054943 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 20:27:19.063362 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 20:27:19.063461 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 20:27:19.063506 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1125 20:27:19.063464 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1125 20:27:19.063543 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 20:27:19.063670 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 20:27:19.063711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 20:27:19.063743 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 20:27:19.070456 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF1125 20:27:19.070526 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 20:27:19.072810 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.653653 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d4326860e9815e99c2fbea1e02f3d7eb8a1007976e299d745695ca34d040a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.668355 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.672479 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.672519 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.672544 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.672585 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.672601 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:29Z","lastTransitionTime":"2025-11-25T20:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.682295 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6caa6264c89f568cc37e5bbbe4ff5a7a6898125607ac73df73aa12b58ee3b439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.699395 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373cf631-46b3-49f3-af97-be8271ce5150\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84bc40d3c987133d89591979927b3b798831b3efeca1fc02cce5e33e8496b0c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8j5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fqvg7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.719861 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.735008 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6fkbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40e594b9-8aa2-400d-b72e-c36e4523ced3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb0e5d91873a8170028223fff5efc95aed446bf7add2da7f25fbb9be747f0118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:28:07Z\\\",\\\"message\\\":\\\"2025-11-25T20:27:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be\\\\n2025-11-25T20:27:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bd15b1b9-877a-4e8f-9b33-013b4a3663be to /host/opt/cni/bin/\\\\n2025-11-25T20:27:22Z [verbose] multus-daemon started\\\\n2025-11-25T20:27:22Z [verbose] 
Readiness Indicator file check\\\\n2025-11-25T20:28:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:28:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmxwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6fkbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.744353 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p4cjj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"457d14e1-8f39-4341-b294-950c3fc924bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2ca5464a4534b83e
886480148c362b7968a349297dd786934d3375ee8d5da70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7zbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p4cjj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.763016 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bac24341-5a0b-4902-bdd8-5be9d117f62c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fa431a322e40a7e2066dffbbc25cf037b0f6a885bb08129016b3a68043f2343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d2721e799d01a005ce9a5ba34c69315634d8028b4ee1d056625aaa7bdc0a564\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4d40cab1d438d3b5e007f1b0442f3d629dbc250d3e5d32e66c69ae676f8967\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9ae7fc2cc64cdfbf4b83c5a23f1a782e4b21f92d733b03f7d794beda43a5b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdfca7061edbb69cfeec275f97d3a8d6186ba741f2be2f906e066b7b7299999c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba046c968ef7c1fad07ef98537d6897813db09d1d535be86d93843f3bea9ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95bc00f3c1001aa5656f7322dfd2d36e8b684d907d954a86034f6ef0f1a9cd8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1010cd2e8816bc03ca9bfdd3425d23a604dd79440a845d551d0a0218134383bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.774108 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5370aa6c-92e7-4447-aa75-b1447ec44715\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d228861842ad79f241e1bb31222ffcc9e1a9f698e036a73a87d6d7d97f51f9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://123617c4db80100b747b7aad700831dc64b324c68688b53a2103be194c9a9933\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c938af7bc233b84d7911804cfc58c11b7bac9fea1cd554210f5ea336512fff54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69a6c48423b2e5f5d6d26df30238b770d2530dde27a947ef38ff17408b459482\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.775639 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.775675 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 
20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.775686 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.775702 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.775711 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:29Z","lastTransitionTime":"2025-11-25T20:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.785753 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.796923 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rltkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40f035b7-d789-469f-976b-bc8b70a1a9b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254ed2bc0f8f3890efb933a5f7f1abcba5883064ceef23eaf6813b8a97408da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qjdbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rltkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.811039 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94cdd87a-a76e-46dd-ba54-2584620c32a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76c99e72438bbd4be3987d84853db20addf4c5864bd5d52390db4c46dbe9896d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e8ead5724b9a567656549076a88fa4018cbdf3bda52669204ce28f6f50a216\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea29a9df1f53178beebe03f3786bb2abf4f08b99f9a86b230ebd790048d5185b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77d2139f113febb86273483ec183f8de496d4407d04bdfa5f01aa159e27f6f32\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d0f2d96732ae9268130a9e19e75e6af579c9f703325c0fbccfd5f0dbbc04d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499
136740b77ac712e367fa6ebaf07ad05c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b86828e13f95529833d80ee0de2c5499136740b77ac712e367fa6ebaf07ad05c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e69df0d99bad66634491df57a6a9606cd1cc61de1e1ef2fe614ec2595301368\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dl4vd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hn4fk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.824173 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92ec798c-ddd9-418f-8bce-87202a5bd9cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:26:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61497445d43e7f6f3627d9257f524ce24563e58aadc3053f65b1bf387269baf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://371536cd44bd8b383af5f9778e60e37005a6e1d4fb4a0697c19716f1a651b15c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://029ccfbeae9e55d5e148a9a526b40bc5e14624f65ff921acd4ea4ef8e222e3ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-25T20:27:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:26:59Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.835152 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ff4f2af26c6eebbd72e81d5c2c0b6f0c5b97247521e9fe0fcbf7a0476eb05b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef4e03e8cd8bfc9efbd3a6bfc232f7955d8ba778198016907eb2f010627ea01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.854097 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b577d7b6-2c09-4ed8-8907-36620b2145b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T20:28:20Z\\\",\\\"message\\\":\\\"-config-operator/machine-config-daemon-fqvg7 after 0 failed attempt(s)\\\\nI1125 20:28:20.467654 7036 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 20:28:20.467657 7036 default_network_controller.go:776] Recording success event on pod 
openshift-machine-config-operator/machine-config-daemon-fqvg7\\\\nI1125 20:28:20.466463 7036 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1125 20:28:20.467672 7036 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1125 20:28:20.467681 7036 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1125 20:28:20.466389 7036 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1125 20:28:20.467693 7036 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1125 20:28:20.467701 7036 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF1125 20:28:20.467703 7036 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T20:28:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac51f7ad59e59b3250
73a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T20:27:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T20:27:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d5mng\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4t2p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.866410 4983 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8279fdf-f2c7-4a21-a3de-5ed70023b86c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T20:27:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://054dd3d06ee2826a0a71bade8b4d75691b19edbe0a8307e274c3966142ac2163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b526a948dad0f0317945be054a5bdeb2c4f54
838783edcc90ec36723d480dd13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T20:27:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5z2tp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T20:27:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5zg69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:29Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.878435 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.878474 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.878486 4983 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.878503 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.878514 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:29Z","lastTransitionTime":"2025-11-25T20:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.981622 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.981673 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.981682 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.981698 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:29 crc kubenswrapper[4983]: I1125 20:28:29.981712 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:29Z","lastTransitionTime":"2025-11-25T20:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.084409 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.084466 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.084482 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.084501 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.084513 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:30Z","lastTransitionTime":"2025-11-25T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.187182 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.187222 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.187235 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.187257 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.187269 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:30Z","lastTransitionTime":"2025-11-25T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.290324 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.290367 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.290379 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.290397 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.290409 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:30Z","lastTransitionTime":"2025-11-25T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.392798 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.392833 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.392843 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.392861 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.392873 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:30Z","lastTransitionTime":"2025-11-25T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.495429 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.495481 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.495498 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.495521 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.495540 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:30Z","lastTransitionTime":"2025-11-25T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.598344 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.598385 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.598396 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.598413 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.598424 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:30Z","lastTransitionTime":"2025-11-25T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.604765 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:30 crc kubenswrapper[4983]: E1125 20:28:30.604899 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.618810 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.700655 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.700703 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.700724 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.700747 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.700767 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:30Z","lastTransitionTime":"2025-11-25T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.802873 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.802906 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.802917 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.802933 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.802943 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:30Z","lastTransitionTime":"2025-11-25T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.905601 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.905665 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.905686 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.905708 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:30 crc kubenswrapper[4983]: I1125 20:28:30.905726 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:30Z","lastTransitionTime":"2025-11-25T20:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.007649 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.007683 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.007692 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.007707 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.007717 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:31Z","lastTransitionTime":"2025-11-25T20:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.110574 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.110623 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.110634 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.110651 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.110662 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:31Z","lastTransitionTime":"2025-11-25T20:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.213652 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.213724 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.213758 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.213793 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.213815 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:31Z","lastTransitionTime":"2025-11-25T20:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.315968 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.316008 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.316018 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.316053 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.316064 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:31Z","lastTransitionTime":"2025-11-25T20:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.418210 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.418258 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.418273 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.418292 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.418304 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:31Z","lastTransitionTime":"2025-11-25T20:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.521011 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.521071 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.521082 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.521097 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.521107 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:31Z","lastTransitionTime":"2025-11-25T20:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.604794 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:31 crc kubenswrapper[4983]: E1125 20:28:31.605017 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.605067 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.605110 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:31 crc kubenswrapper[4983]: E1125 20:28:31.605221 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:31 crc kubenswrapper[4983]: E1125 20:28:31.605354 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.623258 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.623281 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.623289 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.623300 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.623309 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:31Z","lastTransitionTime":"2025-11-25T20:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.726241 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.726316 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.726341 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.726369 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.726390 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:31Z","lastTransitionTime":"2025-11-25T20:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.829951 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.830017 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.830035 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.830061 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.830079 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:31Z","lastTransitionTime":"2025-11-25T20:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.933251 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.933325 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.933345 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.933400 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:31 crc kubenswrapper[4983]: I1125 20:28:31.933426 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:31Z","lastTransitionTime":"2025-11-25T20:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.036339 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.036383 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.036395 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.036414 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.036425 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:32Z","lastTransitionTime":"2025-11-25T20:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.139297 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.139364 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.139382 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.139411 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.139429 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:32Z","lastTransitionTime":"2025-11-25T20:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.241735 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.241784 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.241798 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.241814 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.241827 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:32Z","lastTransitionTime":"2025-11-25T20:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.344439 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.344582 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.344595 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.344614 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.344627 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:32Z","lastTransitionTime":"2025-11-25T20:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.447155 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.447242 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.447265 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.447291 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.447310 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:32Z","lastTransitionTime":"2025-11-25T20:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.549847 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.549897 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.549911 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.549928 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.549941 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:32Z","lastTransitionTime":"2025-11-25T20:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.604330 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:32 crc kubenswrapper[4983]: E1125 20:28:32.604533 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.652348 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.652412 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.652435 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.652463 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.652480 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:32Z","lastTransitionTime":"2025-11-25T20:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.756730 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.756811 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.756838 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.756872 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.756895 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:32Z","lastTransitionTime":"2025-11-25T20:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.863744 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.863834 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.863860 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.863894 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.863927 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:32Z","lastTransitionTime":"2025-11-25T20:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.967000 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.967072 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.967099 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.967129 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:32 crc kubenswrapper[4983]: I1125 20:28:32.967153 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:32Z","lastTransitionTime":"2025-11-25T20:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.070655 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.070731 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.070755 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.070784 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.070826 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:33Z","lastTransitionTime":"2025-11-25T20:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.174185 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.174263 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.174289 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.174320 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.174345 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:33Z","lastTransitionTime":"2025-11-25T20:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.279148 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.279227 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.279248 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.279282 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.279304 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:33Z","lastTransitionTime":"2025-11-25T20:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.382364 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.382432 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.382444 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.382493 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.382507 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:33Z","lastTransitionTime":"2025-11-25T20:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.485808 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.485871 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.485888 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.485909 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.485924 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:33Z","lastTransitionTime":"2025-11-25T20:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.589131 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.589197 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.589211 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.589232 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.589245 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:33Z","lastTransitionTime":"2025-11-25T20:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.604680 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.604689 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.604783 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:33 crc kubenswrapper[4983]: E1125 20:28:33.605601 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:33 crc kubenswrapper[4983]: E1125 20:28:33.605705 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:33 crc kubenswrapper[4983]: E1125 20:28:33.605846 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.607425 4983 scope.go:117] "RemoveContainer" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" Nov 25 20:28:33 crc kubenswrapper[4983]: E1125 20:28:33.607758 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.692667 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.692734 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.692748 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.692767 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.693060 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:33Z","lastTransitionTime":"2025-11-25T20:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.796110 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.796186 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.796230 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.796261 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.796285 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:33Z","lastTransitionTime":"2025-11-25T20:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.899115 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.899167 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.899181 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.899199 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:33 crc kubenswrapper[4983]: I1125 20:28:33.899211 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:33Z","lastTransitionTime":"2025-11-25T20:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.001914 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.001979 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.001997 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.002023 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.002042 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:34Z","lastTransitionTime":"2025-11-25T20:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.106492 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.106602 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.106628 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.106661 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.106684 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:34Z","lastTransitionTime":"2025-11-25T20:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.210680 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.210764 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.210782 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.210805 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.210823 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:34Z","lastTransitionTime":"2025-11-25T20:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.314608 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.314690 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.314710 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.314741 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.314761 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:34Z","lastTransitionTime":"2025-11-25T20:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.418608 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.418709 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.418733 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.418771 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.418793 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:34Z","lastTransitionTime":"2025-11-25T20:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.522392 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.522460 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.522474 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.522497 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.522515 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:34Z","lastTransitionTime":"2025-11-25T20:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.604901 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:34 crc kubenswrapper[4983]: E1125 20:28:34.605066 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.624696 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.624738 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.624750 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.624766 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.624781 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:34Z","lastTransitionTime":"2025-11-25T20:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.728479 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.729051 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.729067 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.729089 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.729107 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:34Z","lastTransitionTime":"2025-11-25T20:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.831543 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.831589 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.831597 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.831609 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.831617 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:34Z","lastTransitionTime":"2025-11-25T20:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.933870 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.933904 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.933916 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.933931 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:34 crc kubenswrapper[4983]: I1125 20:28:34.933941 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:34Z","lastTransitionTime":"2025-11-25T20:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.035736 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.035764 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.035773 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.035786 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.035794 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:35Z","lastTransitionTime":"2025-11-25T20:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.138868 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.138927 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.138944 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.138967 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.138983 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:35Z","lastTransitionTime":"2025-11-25T20:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.242289 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.242328 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.242338 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.242353 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.242362 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:35Z","lastTransitionTime":"2025-11-25T20:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.346237 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.346291 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.346308 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.346334 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.346352 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:35Z","lastTransitionTime":"2025-11-25T20:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.449706 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.449802 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.449825 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.449849 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.449904 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:35Z","lastTransitionTime":"2025-11-25T20:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.552311 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.552352 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.552361 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.552374 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.552383 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:35Z","lastTransitionTime":"2025-11-25T20:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.604878 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.604926 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.604892 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:35 crc kubenswrapper[4983]: E1125 20:28:35.605047 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:35 crc kubenswrapper[4983]: E1125 20:28:35.605134 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:35 crc kubenswrapper[4983]: E1125 20:28:35.605195 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.654906 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.654940 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.654951 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.654969 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.654981 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:35Z","lastTransitionTime":"2025-11-25T20:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.757244 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.757274 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.757284 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.757301 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.757331 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:35Z","lastTransitionTime":"2025-11-25T20:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.859239 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.859280 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.859292 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.859310 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.859322 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:35Z","lastTransitionTime":"2025-11-25T20:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.961902 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.961943 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.961956 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.961972 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:35 crc kubenswrapper[4983]: I1125 20:28:35.961983 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:35Z","lastTransitionTime":"2025-11-25T20:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.065010 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.065080 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.065105 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.065137 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.065158 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:36Z","lastTransitionTime":"2025-11-25T20:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.168488 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.168527 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.168537 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.168575 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.168589 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:36Z","lastTransitionTime":"2025-11-25T20:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.271359 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.271431 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.271456 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.271487 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.271509 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:36Z","lastTransitionTime":"2025-11-25T20:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.374538 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.374593 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.374606 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.374622 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.374633 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:36Z","lastTransitionTime":"2025-11-25T20:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.477041 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.477124 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.477162 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.477194 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.477220 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:36Z","lastTransitionTime":"2025-11-25T20:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.579949 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.580012 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.580030 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.580053 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.580070 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:36Z","lastTransitionTime":"2025-11-25T20:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.604267 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:36 crc kubenswrapper[4983]: E1125 20:28:36.604516 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.682767 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.682847 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.682871 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.682901 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.682924 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:36Z","lastTransitionTime":"2025-11-25T20:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.785968 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.786014 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.786031 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.786053 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.786069 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:36Z","lastTransitionTime":"2025-11-25T20:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.889001 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.889092 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.889110 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.889134 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.889152 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:36Z","lastTransitionTime":"2025-11-25T20:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.992490 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.992914 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.993130 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.993292 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:36 crc kubenswrapper[4983]: I1125 20:28:36.993470 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:36Z","lastTransitionTime":"2025-11-25T20:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.097258 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.097309 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.097328 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.097353 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.097401 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.199950 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.200011 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.200029 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.200053 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.200071 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.302147 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.302195 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.302207 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.302227 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.302239 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.405477 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.405584 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.405611 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.405655 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.405679 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.412294 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.412369 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.412402 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.412429 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.412444 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: E1125 20:28:37.426970 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:37Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.432185 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.432221 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.432233 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.432250 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.432261 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: E1125 20:28:37.446953 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:37Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.452151 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.452199 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.452210 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.452227 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.452237 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: E1125 20:28:37.465602 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:37Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.469334 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.469576 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.469608 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.469630 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.469862 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: E1125 20:28:37.484430 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:37Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.489124 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.489170 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.489184 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.489202 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.489214 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: E1125 20:28:37.506782 4983 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T20:28:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f7a9b540-24a4-4342-97be-ae514f2fa363\\\",\\\"systemUUID\\\":\\\"624587ca-b3c3-41fb-b4fb-210ed293ff8f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T20:28:37Z is after 2025-08-24T17:21:41Z" Nov 25 20:28:37 crc kubenswrapper[4983]: E1125 20:28:37.507448 4983 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.508870 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.508903 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.508937 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.508953 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.508964 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.605285 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.605318 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.605687 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:37 crc kubenswrapper[4983]: E1125 20:28:37.606170 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:37 crc kubenswrapper[4983]: E1125 20:28:37.606498 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:37 crc kubenswrapper[4983]: E1125 20:28:37.606762 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.611435 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.611460 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.611469 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.611482 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.611492 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.714948 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.715009 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.715026 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.715049 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.715065 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.817663 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.817724 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.817743 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.817770 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.817788 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.920310 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.920371 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.920391 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.920416 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:37 crc kubenswrapper[4983]: I1125 20:28:37.920434 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:37Z","lastTransitionTime":"2025-11-25T20:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.022757 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.022791 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.022799 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.022812 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.022820 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:38Z","lastTransitionTime":"2025-11-25T20:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.125269 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.125316 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.125335 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.125356 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.125368 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:38Z","lastTransitionTime":"2025-11-25T20:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.227223 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.227268 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.227280 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.227296 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.227308 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:38Z","lastTransitionTime":"2025-11-25T20:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.329967 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.329995 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.330004 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.330015 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.330023 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:38Z","lastTransitionTime":"2025-11-25T20:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.432376 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.432407 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.432417 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.432430 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.432440 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:38Z","lastTransitionTime":"2025-11-25T20:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.534717 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.535041 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.535133 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.535227 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.535322 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:38Z","lastTransitionTime":"2025-11-25T20:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.570125 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:38 crc kubenswrapper[4983]: E1125 20:28:38.570243 4983 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:28:38 crc kubenswrapper[4983]: E1125 20:28:38.570289 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs podName:badc9ffd-b860-4ebb-a59f-044def6963d4 nodeName:}" failed. No retries permitted until 2025-11-25 20:29:42.570274759 +0000 UTC m=+163.682808151 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs") pod "network-metrics-daemon-59l9r" (UID: "badc9ffd-b860-4ebb-a59f-044def6963d4") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.604678 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:38 crc kubenswrapper[4983]: E1125 20:28:38.604953 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.638466 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.638723 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.638932 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.639160 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.639361 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:38Z","lastTransitionTime":"2025-11-25T20:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.742115 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.742159 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.742173 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.742193 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.742207 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:38Z","lastTransitionTime":"2025-11-25T20:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.845994 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.846078 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.846096 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.846123 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.846141 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:38Z","lastTransitionTime":"2025-11-25T20:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.949223 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.949395 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.949418 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.949443 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:38 crc kubenswrapper[4983]: I1125 20:28:38.949461 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:38Z","lastTransitionTime":"2025-11-25T20:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.052052 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.052121 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.052133 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.052150 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.052162 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:39Z","lastTransitionTime":"2025-11-25T20:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.154366 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.154402 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.154410 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.154423 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.154433 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:39Z","lastTransitionTime":"2025-11-25T20:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.256507 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.256593 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.256608 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.256625 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.256635 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:39Z","lastTransitionTime":"2025-11-25T20:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.358877 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.358984 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.358992 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.359006 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.359014 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:39Z","lastTransitionTime":"2025-11-25T20:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.461851 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.461884 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.461893 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.461907 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.461917 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:39Z","lastTransitionTime":"2025-11-25T20:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.564084 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.564153 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.564172 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.564197 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.564214 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:39Z","lastTransitionTime":"2025-11-25T20:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.603913 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.604011 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.604058 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:39 crc kubenswrapper[4983]: E1125 20:28:39.604030 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:39 crc kubenswrapper[4983]: E1125 20:28:39.604214 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:39 crc kubenswrapper[4983]: E1125 20:28:39.604331 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.629838 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=9.629826044 podStartE2EDuration="9.629826044s" podCreationTimestamp="2025-11-25 20:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:28:39.629705921 +0000 UTC m=+100.742239323" watchObservedRunningTime="2025-11-25 20:28:39.629826044 +0000 UTC m=+100.742359436" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.666531 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.666591 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.666603 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.666619 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.666630 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:39Z","lastTransitionTime":"2025-11-25T20:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.678739 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.678725933 podStartE2EDuration="1m15.678725933s" podCreationTimestamp="2025-11-25 20:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:28:39.653984031 +0000 UTC m=+100.766517423" watchObservedRunningTime="2025-11-25 20:28:39.678725933 +0000 UTC m=+100.791259325" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.728894 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5zg69" podStartSLOduration=79.728873133 podStartE2EDuration="1m19.728873133s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:28:39.728794041 +0000 UTC m=+100.841327443" watchObservedRunningTime="2025-11-25 20:28:39.728873133 +0000 UTC m=+100.841406525" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.766673 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.766646037 podStartE2EDuration="1m20.766646037s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:28:39.749264168 +0000 UTC m=+100.861797570" watchObservedRunningTime="2025-11-25 20:28:39.766646037 +0000 UTC m=+100.879179419" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.768861 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:39 crc 
kubenswrapper[4983]: I1125 20:28:39.769002 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.769105 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.769236 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.769449 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:39Z","lastTransitionTime":"2025-11-25T20:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.824069 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podStartSLOduration=80.824045117 podStartE2EDuration="1m20.824045117s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:28:39.809129438 +0000 UTC m=+100.921662840" watchObservedRunningTime="2025-11-25 20:28:39.824045117 +0000 UTC m=+100.936578519" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.857135 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6fkbz" podStartSLOduration=80.857109015 podStartE2EDuration="1m20.857109015s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-25 20:28:39.857018943 +0000 UTC m=+100.969552345" watchObservedRunningTime="2025-11-25 20:28:39.857109015 +0000 UTC m=+100.969642447" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.872238 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.872293 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.872307 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.872327 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.872341 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:39Z","lastTransitionTime":"2025-11-25T20:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.916248 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=79.916224837 podStartE2EDuration="1m19.916224837s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:28:39.898456137 +0000 UTC m=+101.010989559" watchObservedRunningTime="2025-11-25 20:28:39.916224837 +0000 UTC m=+101.028758229" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.930042 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=47.930027758 podStartE2EDuration="47.930027758s" podCreationTimestamp="2025-11-25 20:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:28:39.915878908 +0000 UTC m=+101.028412310" watchObservedRunningTime="2025-11-25 20:28:39.930027758 +0000 UTC m=+101.042561140" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.940475 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rltkm" podStartSLOduration=80.940454836 podStartE2EDuration="1m20.940454836s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:28:39.939712448 +0000 UTC m=+101.052245840" watchObservedRunningTime="2025-11-25 20:28:39.940454836 +0000 UTC m=+101.052988228" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.958033 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-hn4fk" podStartSLOduration=80.95801271 
podStartE2EDuration="1m20.95801271s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:28:39.956957174 +0000 UTC m=+101.069490566" watchObservedRunningTime="2025-11-25 20:28:39.95801271 +0000 UTC m=+101.070546102" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.974427 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.974475 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.974486 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.974503 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:39 crc kubenswrapper[4983]: I1125 20:28:39.974515 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:39Z","lastTransitionTime":"2025-11-25T20:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.076656 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.076713 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.076724 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.076740 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.076751 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:40Z","lastTransitionTime":"2025-11-25T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.179490 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.179540 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.179580 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.179598 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.179611 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:40Z","lastTransitionTime":"2025-11-25T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.282189 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.282245 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.282262 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.282284 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.282332 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:40Z","lastTransitionTime":"2025-11-25T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.384625 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.384659 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.384667 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.384679 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.384688 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:40Z","lastTransitionTime":"2025-11-25T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.486996 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.487067 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.487076 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.487091 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.487100 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:40Z","lastTransitionTime":"2025-11-25T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.589984 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.590051 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.590071 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.590100 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.590120 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:40Z","lastTransitionTime":"2025-11-25T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.604496 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:40 crc kubenswrapper[4983]: E1125 20:28:40.604790 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.693659 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.693730 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.693787 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.693819 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.693839 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:40Z","lastTransitionTime":"2025-11-25T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.796801 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.796842 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.796854 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.796871 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.796883 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:40Z","lastTransitionTime":"2025-11-25T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.899878 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.899916 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.899928 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.899948 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:40 crc kubenswrapper[4983]: I1125 20:28:40.899958 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:40Z","lastTransitionTime":"2025-11-25T20:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.002315 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.002372 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.002392 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.002421 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.002441 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:41Z","lastTransitionTime":"2025-11-25T20:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.105269 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.105325 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.105337 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.105355 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.105370 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:41Z","lastTransitionTime":"2025-11-25T20:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.208695 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.208762 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.208774 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.208796 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.208809 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:41Z","lastTransitionTime":"2025-11-25T20:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.311057 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.311101 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.311115 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.311135 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.311147 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:41Z","lastTransitionTime":"2025-11-25T20:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.413658 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.413695 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.413706 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.413720 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.413730 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:41Z","lastTransitionTime":"2025-11-25T20:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.516168 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.516247 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.516265 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.516292 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.516320 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:41Z","lastTransitionTime":"2025-11-25T20:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.604106 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.604142 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:41 crc kubenswrapper[4983]: E1125 20:28:41.604257 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.604310 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:41 crc kubenswrapper[4983]: E1125 20:28:41.604413 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:41 crc kubenswrapper[4983]: E1125 20:28:41.604484 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.618836 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.618871 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.618879 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.618892 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.618901 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:41Z","lastTransitionTime":"2025-11-25T20:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.720661 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.720733 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.720751 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.720777 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.720793 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:41Z","lastTransitionTime":"2025-11-25T20:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.823801 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.823886 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.823909 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.823940 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.823964 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:41Z","lastTransitionTime":"2025-11-25T20:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.926753 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.926841 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.926870 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.926903 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:41 crc kubenswrapper[4983]: I1125 20:28:41.926928 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:41Z","lastTransitionTime":"2025-11-25T20:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.030206 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.030282 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.030306 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.030344 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.030364 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:42Z","lastTransitionTime":"2025-11-25T20:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.132959 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.132999 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.133010 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.133025 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.133033 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:42Z","lastTransitionTime":"2025-11-25T20:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.235918 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.236072 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.236101 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.236129 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.236150 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:42Z","lastTransitionTime":"2025-11-25T20:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.339081 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.339123 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.339132 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.339146 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.339155 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:42Z","lastTransitionTime":"2025-11-25T20:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.446725 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.446849 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.447782 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.447902 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.447931 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:42Z","lastTransitionTime":"2025-11-25T20:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.550979 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.551040 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.551058 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.551083 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.551100 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:42Z","lastTransitionTime":"2025-11-25T20:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.604195 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:42 crc kubenswrapper[4983]: E1125 20:28:42.604339 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.654535 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.654649 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.654673 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.654704 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.654726 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:42Z","lastTransitionTime":"2025-11-25T20:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.758516 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.758614 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.758633 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.758659 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.758677 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:42Z","lastTransitionTime":"2025-11-25T20:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.861349 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.861391 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.861422 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.861440 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.861453 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:42Z","lastTransitionTime":"2025-11-25T20:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.963637 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.963707 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.963725 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.963777 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:42 crc kubenswrapper[4983]: I1125 20:28:42.963797 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:42Z","lastTransitionTime":"2025-11-25T20:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.066492 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.066528 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.066537 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.066549 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.066573 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:43Z","lastTransitionTime":"2025-11-25T20:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.169777 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.169830 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.169848 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.169868 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.169883 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:43Z","lastTransitionTime":"2025-11-25T20:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.272385 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.272446 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.272462 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.272483 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.272498 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:43Z","lastTransitionTime":"2025-11-25T20:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.374765 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.374815 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.374827 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.374849 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.374866 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:43Z","lastTransitionTime":"2025-11-25T20:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.477311 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.477347 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.477355 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.477369 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.477376 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:43Z","lastTransitionTime":"2025-11-25T20:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.580432 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.580481 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.580498 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.580539 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.580582 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:43Z","lastTransitionTime":"2025-11-25T20:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.604072 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:43 crc kubenswrapper[4983]: E1125 20:28:43.604258 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.604539 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:43 crc kubenswrapper[4983]: E1125 20:28:43.604723 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.604945 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:43 crc kubenswrapper[4983]: E1125 20:28:43.605152 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.683241 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.683291 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.683301 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.683318 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.683330 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:43Z","lastTransitionTime":"2025-11-25T20:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.785810 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.785873 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.785897 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.785926 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.785947 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:43Z","lastTransitionTime":"2025-11-25T20:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.889086 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.889149 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.889169 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.889195 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.889214 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:43Z","lastTransitionTime":"2025-11-25T20:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.991961 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.992028 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.992045 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.992071 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:43 crc kubenswrapper[4983]: I1125 20:28:43.992089 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:43Z","lastTransitionTime":"2025-11-25T20:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.094745 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.094804 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.094815 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.094829 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.094839 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:44Z","lastTransitionTime":"2025-11-25T20:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.197419 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.197484 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.197507 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.197538 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.197609 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:44Z","lastTransitionTime":"2025-11-25T20:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.300364 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.300427 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.300447 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.300472 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.300489 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:44Z","lastTransitionTime":"2025-11-25T20:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.403289 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.403353 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.403377 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.403407 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.403429 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:44Z","lastTransitionTime":"2025-11-25T20:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.505449 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.505488 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.505498 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.505513 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.505521 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:44Z","lastTransitionTime":"2025-11-25T20:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.604742 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:44 crc kubenswrapper[4983]: E1125 20:28:44.604882 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.608301 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.608359 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.608376 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.608400 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.608421 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:44Z","lastTransitionTime":"2025-11-25T20:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.710720 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.710768 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.710779 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.710796 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.710805 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:44Z","lastTransitionTime":"2025-11-25T20:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.813349 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.813391 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.813409 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.813429 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.813442 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:44Z","lastTransitionTime":"2025-11-25T20:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.917231 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.917270 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.917279 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.917293 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:44 crc kubenswrapper[4983]: I1125 20:28:44.917304 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:44Z","lastTransitionTime":"2025-11-25T20:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.020985 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.021036 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.021049 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.021066 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.021077 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:45Z","lastTransitionTime":"2025-11-25T20:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.123645 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.123703 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.123728 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.123756 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.123781 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:45Z","lastTransitionTime":"2025-11-25T20:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.226935 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.227007 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.227033 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.227063 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.227085 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:45Z","lastTransitionTime":"2025-11-25T20:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.330015 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.330072 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.330087 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.330109 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.330122 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:45Z","lastTransitionTime":"2025-11-25T20:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.432716 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.432771 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.432785 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.432805 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.432820 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:45Z","lastTransitionTime":"2025-11-25T20:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.535613 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.535692 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.535717 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.535752 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.535774 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:45Z","lastTransitionTime":"2025-11-25T20:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.604768 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.604828 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:45 crc kubenswrapper[4983]: E1125 20:28:45.604881 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:45 crc kubenswrapper[4983]: E1125 20:28:45.604954 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.605036 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:45 crc kubenswrapper[4983]: E1125 20:28:45.605130 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.638119 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.638143 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.638151 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.638163 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.638172 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:45Z","lastTransitionTime":"2025-11-25T20:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.740621 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.740660 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.740671 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.740688 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.740700 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:45Z","lastTransitionTime":"2025-11-25T20:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.843953 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.844006 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.844023 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.844046 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.844063 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:45Z","lastTransitionTime":"2025-11-25T20:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.947118 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.947191 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.947214 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.947245 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:45 crc kubenswrapper[4983]: I1125 20:28:45.947270 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:45Z","lastTransitionTime":"2025-11-25T20:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.049653 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.049695 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.049704 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.049723 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.049732 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:46Z","lastTransitionTime":"2025-11-25T20:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.153243 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.153287 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.153295 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.153310 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.153319 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:46Z","lastTransitionTime":"2025-11-25T20:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.255907 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.255955 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.255967 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.255988 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.256000 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:46Z","lastTransitionTime":"2025-11-25T20:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.358528 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.358630 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.358643 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.358670 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.358684 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:46Z","lastTransitionTime":"2025-11-25T20:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.460897 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.460941 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.460951 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.460965 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.460974 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:46Z","lastTransitionTime":"2025-11-25T20:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.562611 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.563149 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.563228 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.563300 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.563375 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:46Z","lastTransitionTime":"2025-11-25T20:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.605007 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:46 crc kubenswrapper[4983]: E1125 20:28:46.605212 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.666501 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.666599 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.666630 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.666658 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.666677 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:46Z","lastTransitionTime":"2025-11-25T20:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.770087 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.770152 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.770162 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.770179 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.770191 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:46Z","lastTransitionTime":"2025-11-25T20:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.873195 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.873250 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.873262 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.873281 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.873293 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:46Z","lastTransitionTime":"2025-11-25T20:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.975852 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.975927 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.975950 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.975983 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:46 crc kubenswrapper[4983]: I1125 20:28:46.976007 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:46Z","lastTransitionTime":"2025-11-25T20:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.079116 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.079180 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.079208 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.079240 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.079262 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:47Z","lastTransitionTime":"2025-11-25T20:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.181594 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.181688 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.181711 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.181749 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.181771 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:47Z","lastTransitionTime":"2025-11-25T20:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.283430 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.283479 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.283495 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.283516 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.283533 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:47Z","lastTransitionTime":"2025-11-25T20:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.386771 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.386849 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.386867 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.386891 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.386911 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:47Z","lastTransitionTime":"2025-11-25T20:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.489440 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.489513 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.489536 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.489615 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.489635 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:47Z","lastTransitionTime":"2025-11-25T20:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.592739 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.592777 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.592787 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.592802 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.592812 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:47Z","lastTransitionTime":"2025-11-25T20:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.605007 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:47 crc kubenswrapper[4983]: E1125 20:28:47.605201 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.605288 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.605874 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:47 crc kubenswrapper[4983]: E1125 20:28:47.605958 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:47 crc kubenswrapper[4983]: E1125 20:28:47.606108 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.606323 4983 scope.go:117] "RemoveContainer" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" Nov 25 20:28:47 crc kubenswrapper[4983]: E1125 20:28:47.606546 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.695075 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.695115 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.695147 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.695163 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.695174 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:47Z","lastTransitionTime":"2025-11-25T20:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.767040 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.767084 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.767093 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.767108 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.767118 4983 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T20:28:47Z","lastTransitionTime":"2025-11-25T20:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.811240 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-p4cjj" podStartSLOduration=88.811216956 podStartE2EDuration="1m28.811216956s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:28:39.965699971 +0000 UTC m=+101.078233363" watchObservedRunningTime="2025-11-25 20:28:47.811216956 +0000 UTC m=+108.923750358" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.811969 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk"] Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.812418 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.815013 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.815329 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.815772 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.816518 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.908811 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43aa07e9-e1db-409a-803f-a5b5c51aff0b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.908856 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/43aa07e9-e1db-409a-803f-a5b5c51aff0b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.908909 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/43aa07e9-e1db-409a-803f-a5b5c51aff0b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.908950 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43aa07e9-e1db-409a-803f-a5b5c51aff0b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:47 crc kubenswrapper[4983]: I1125 20:28:47.908971 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43aa07e9-e1db-409a-803f-a5b5c51aff0b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.010215 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/43aa07e9-e1db-409a-803f-a5b5c51aff0b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.010305 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43aa07e9-e1db-409a-803f-a5b5c51aff0b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.010343 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43aa07e9-e1db-409a-803f-a5b5c51aff0b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.010401 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43aa07e9-e1db-409a-803f-a5b5c51aff0b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.010405 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/43aa07e9-e1db-409a-803f-a5b5c51aff0b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.010447 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/43aa07e9-e1db-409a-803f-a5b5c51aff0b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.010542 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/43aa07e9-e1db-409a-803f-a5b5c51aff0b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.013672 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43aa07e9-e1db-409a-803f-a5b5c51aff0b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.016993 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43aa07e9-e1db-409a-803f-a5b5c51aff0b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 
20:28:48.040863 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43aa07e9-e1db-409a-803f-a5b5c51aff0b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-89zvk\" (UID: \"43aa07e9-e1db-409a-803f-a5b5c51aff0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.126661 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.534615 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" event={"ID":"43aa07e9-e1db-409a-803f-a5b5c51aff0b","Type":"ContainerStarted","Data":"db262aed909aea765d849b6140e9b79d9f37ec723a31626061101915b43d9ecb"} Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.534923 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" event={"ID":"43aa07e9-e1db-409a-803f-a5b5c51aff0b","Type":"ContainerStarted","Data":"096465f43db4994711618db44dd0cc71121901c8a0699c09bb6c28400adad951"} Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.552601 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-89zvk" podStartSLOduration=89.552588092 podStartE2EDuration="1m29.552588092s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:28:48.551993988 +0000 UTC m=+109.664527380" watchObservedRunningTime="2025-11-25 20:28:48.552588092 +0000 UTC m=+109.665121484" Nov 25 20:28:48 crc kubenswrapper[4983]: I1125 20:28:48.604287 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:48 crc kubenswrapper[4983]: E1125 20:28:48.604443 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:49 crc kubenswrapper[4983]: I1125 20:28:49.604768 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:49 crc kubenswrapper[4983]: I1125 20:28:49.604786 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:49 crc kubenswrapper[4983]: I1125 20:28:49.604817 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:49 crc kubenswrapper[4983]: E1125 20:28:49.606821 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:49 crc kubenswrapper[4983]: E1125 20:28:49.606970 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:49 crc kubenswrapper[4983]: E1125 20:28:49.607126 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:50 crc kubenswrapper[4983]: I1125 20:28:50.604935 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:50 crc kubenswrapper[4983]: E1125 20:28:50.605276 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:51 crc kubenswrapper[4983]: I1125 20:28:51.604946 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:51 crc kubenswrapper[4983]: I1125 20:28:51.604974 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:51 crc kubenswrapper[4983]: E1125 20:28:51.605448 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:51 crc kubenswrapper[4983]: E1125 20:28:51.605673 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:51 crc kubenswrapper[4983]: I1125 20:28:51.605752 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:51 crc kubenswrapper[4983]: E1125 20:28:51.605926 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:52 crc kubenswrapper[4983]: I1125 20:28:52.604510 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:52 crc kubenswrapper[4983]: E1125 20:28:52.604947 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:53 crc kubenswrapper[4983]: I1125 20:28:53.604936 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:53 crc kubenswrapper[4983]: I1125 20:28:53.605120 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:53 crc kubenswrapper[4983]: I1125 20:28:53.605185 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:53 crc kubenswrapper[4983]: E1125 20:28:53.605269 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:53 crc kubenswrapper[4983]: E1125 20:28:53.605310 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:53 crc kubenswrapper[4983]: E1125 20:28:53.605461 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:54 crc kubenswrapper[4983]: I1125 20:28:54.560070 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6fkbz_40e594b9-8aa2-400d-b72e-c36e4523ced3/kube-multus/1.log" Nov 25 20:28:54 crc kubenswrapper[4983]: I1125 20:28:54.560710 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6fkbz_40e594b9-8aa2-400d-b72e-c36e4523ced3/kube-multus/0.log" Nov 25 20:28:54 crc kubenswrapper[4983]: I1125 20:28:54.560755 4983 generic.go:334] "Generic (PLEG): container finished" podID="40e594b9-8aa2-400d-b72e-c36e4523ced3" containerID="eb0e5d91873a8170028223fff5efc95aed446bf7add2da7f25fbb9be747f0118" exitCode=1 Nov 25 20:28:54 crc kubenswrapper[4983]: I1125 20:28:54.560786 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6fkbz" event={"ID":"40e594b9-8aa2-400d-b72e-c36e4523ced3","Type":"ContainerDied","Data":"eb0e5d91873a8170028223fff5efc95aed446bf7add2da7f25fbb9be747f0118"} Nov 25 20:28:54 crc kubenswrapper[4983]: I1125 20:28:54.560835 4983 scope.go:117] "RemoveContainer" containerID="a4fbca1b01edc4b686c0a04bd0c760656e25db3a21f21d5277cb808409f9f3fe" Nov 25 20:28:54 crc kubenswrapper[4983]: I1125 20:28:54.561342 4983 scope.go:117] "RemoveContainer" containerID="eb0e5d91873a8170028223fff5efc95aed446bf7add2da7f25fbb9be747f0118" Nov 25 20:28:54 crc 
kubenswrapper[4983]: E1125 20:28:54.561544 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-6fkbz_openshift-multus(40e594b9-8aa2-400d-b72e-c36e4523ced3)\"" pod="openshift-multus/multus-6fkbz" podUID="40e594b9-8aa2-400d-b72e-c36e4523ced3" Nov 25 20:28:54 crc kubenswrapper[4983]: I1125 20:28:54.605509 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:54 crc kubenswrapper[4983]: E1125 20:28:54.605694 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:55 crc kubenswrapper[4983]: I1125 20:28:55.565769 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6fkbz_40e594b9-8aa2-400d-b72e-c36e4523ced3/kube-multus/1.log" Nov 25 20:28:55 crc kubenswrapper[4983]: I1125 20:28:55.604676 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:55 crc kubenswrapper[4983]: I1125 20:28:55.604775 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:55 crc kubenswrapper[4983]: I1125 20:28:55.604901 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:55 crc kubenswrapper[4983]: E1125 20:28:55.604904 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:55 crc kubenswrapper[4983]: E1125 20:28:55.605062 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:55 crc kubenswrapper[4983]: E1125 20:28:55.605236 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:56 crc kubenswrapper[4983]: I1125 20:28:56.605041 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:56 crc kubenswrapper[4983]: E1125 20:28:56.605805 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:57 crc kubenswrapper[4983]: I1125 20:28:57.604816 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:57 crc kubenswrapper[4983]: I1125 20:28:57.604830 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:57 crc kubenswrapper[4983]: I1125 20:28:57.604902 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:57 crc kubenswrapper[4983]: E1125 20:28:57.605180 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:57 crc kubenswrapper[4983]: E1125 20:28:57.605365 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:57 crc kubenswrapper[4983]: E1125 20:28:57.605516 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:58 crc kubenswrapper[4983]: I1125 20:28:58.604874 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:28:58 crc kubenswrapper[4983]: E1125 20:28:58.605121 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:28:58 crc kubenswrapper[4983]: I1125 20:28:58.606363 4983 scope.go:117] "RemoveContainer" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" Nov 25 20:28:58 crc kubenswrapper[4983]: E1125 20:28:58.606775 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4t2p5_openshift-ovn-kubernetes(b577d7b6-2c09-4ed8-8907-36620b2145b2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" Nov 25 20:28:59 crc kubenswrapper[4983]: E1125 20:28:59.579609 4983 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 25 20:28:59 crc kubenswrapper[4983]: I1125 20:28:59.604484 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:28:59 crc kubenswrapper[4983]: E1125 20:28:59.605494 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:28:59 crc kubenswrapper[4983]: I1125 20:28:59.605602 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:28:59 crc kubenswrapper[4983]: I1125 20:28:59.605704 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:28:59 crc kubenswrapper[4983]: E1125 20:28:59.605909 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:28:59 crc kubenswrapper[4983]: E1125 20:28:59.606109 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:28:59 crc kubenswrapper[4983]: E1125 20:28:59.698356 4983 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 20:29:00 crc kubenswrapper[4983]: I1125 20:29:00.604797 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:00 crc kubenswrapper[4983]: E1125 20:29:00.605030 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:29:01 crc kubenswrapper[4983]: I1125 20:29:01.604901 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:01 crc kubenswrapper[4983]: I1125 20:29:01.604947 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:01 crc kubenswrapper[4983]: E1125 20:29:01.605937 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:29:01 crc kubenswrapper[4983]: I1125 20:29:01.605118 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:01 crc kubenswrapper[4983]: E1125 20:29:01.606118 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:29:01 crc kubenswrapper[4983]: E1125 20:29:01.606228 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:29:02 crc kubenswrapper[4983]: I1125 20:29:02.603954 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:02 crc kubenswrapper[4983]: E1125 20:29:02.604104 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:29:03 crc kubenswrapper[4983]: I1125 20:29:03.604469 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:03 crc kubenswrapper[4983]: I1125 20:29:03.604528 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:03 crc kubenswrapper[4983]: E1125 20:29:03.604689 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:29:03 crc kubenswrapper[4983]: I1125 20:29:03.604728 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:03 crc kubenswrapper[4983]: E1125 20:29:03.604896 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:29:03 crc kubenswrapper[4983]: E1125 20:29:03.605032 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:29:04 crc kubenswrapper[4983]: I1125 20:29:04.604798 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:04 crc kubenswrapper[4983]: E1125 20:29:04.605015 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:29:04 crc kubenswrapper[4983]: E1125 20:29:04.699585 4983 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 20:29:05 crc kubenswrapper[4983]: I1125 20:29:05.604055 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:05 crc kubenswrapper[4983]: I1125 20:29:05.604185 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:05 crc kubenswrapper[4983]: E1125 20:29:05.604284 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:29:05 crc kubenswrapper[4983]: I1125 20:29:05.604069 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:05 crc kubenswrapper[4983]: E1125 20:29:05.604390 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:29:05 crc kubenswrapper[4983]: E1125 20:29:05.604609 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:29:06 crc kubenswrapper[4983]: I1125 20:29:06.604433 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:06 crc kubenswrapper[4983]: E1125 20:29:06.604742 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:29:07 crc kubenswrapper[4983]: I1125 20:29:07.604261 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:07 crc kubenswrapper[4983]: I1125 20:29:07.604345 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:07 crc kubenswrapper[4983]: E1125 20:29:07.604509 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:29:07 crc kubenswrapper[4983]: I1125 20:29:07.604676 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:07 crc kubenswrapper[4983]: E1125 20:29:07.604881 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:29:07 crc kubenswrapper[4983]: E1125 20:29:07.605028 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:29:08 crc kubenswrapper[4983]: I1125 20:29:08.604381 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:08 crc kubenswrapper[4983]: E1125 20:29:08.604677 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:29:09 crc kubenswrapper[4983]: I1125 20:29:09.604694 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:09 crc kubenswrapper[4983]: E1125 20:29:09.605786 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:29:09 crc kubenswrapper[4983]: I1125 20:29:09.605879 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:09 crc kubenswrapper[4983]: I1125 20:29:09.605956 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:09 crc kubenswrapper[4983]: E1125 20:29:09.606392 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:29:09 crc kubenswrapper[4983]: E1125 20:29:09.606423 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:29:09 crc kubenswrapper[4983]: I1125 20:29:09.606486 4983 scope.go:117] "RemoveContainer" containerID="eb0e5d91873a8170028223fff5efc95aed446bf7add2da7f25fbb9be747f0118" Nov 25 20:29:09 crc kubenswrapper[4983]: E1125 20:29:09.700898 4983 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 20:29:10 crc kubenswrapper[4983]: I1125 20:29:10.604933 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:10 crc kubenswrapper[4983]: E1125 20:29:10.605350 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:29:10 crc kubenswrapper[4983]: I1125 20:29:10.622289 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6fkbz_40e594b9-8aa2-400d-b72e-c36e4523ced3/kube-multus/1.log" Nov 25 20:29:10 crc kubenswrapper[4983]: I1125 20:29:10.622370 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6fkbz" event={"ID":"40e594b9-8aa2-400d-b72e-c36e4523ced3","Type":"ContainerStarted","Data":"e343e37d5bca4b2b04199dde3cd4ec70dfcf0769bf38fefdbeb42bcbc1e18a4f"} Nov 25 20:29:11 crc kubenswrapper[4983]: I1125 20:29:11.604872 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:11 crc kubenswrapper[4983]: I1125 20:29:11.604891 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:11 crc kubenswrapper[4983]: I1125 20:29:11.605048 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:11 crc kubenswrapper[4983]: E1125 20:29:11.605198 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:29:11 crc kubenswrapper[4983]: E1125 20:29:11.605459 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:29:11 crc kubenswrapper[4983]: E1125 20:29:11.605533 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:29:12 crc kubenswrapper[4983]: I1125 20:29:12.604754 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:12 crc kubenswrapper[4983]: E1125 20:29:12.604929 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:29:13 crc kubenswrapper[4983]: I1125 20:29:13.604935 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:13 crc kubenswrapper[4983]: I1125 20:29:13.604989 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:13 crc kubenswrapper[4983]: E1125 20:29:13.605179 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:29:13 crc kubenswrapper[4983]: I1125 20:29:13.605249 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:13 crc kubenswrapper[4983]: E1125 20:29:13.605963 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:29:13 crc kubenswrapper[4983]: E1125 20:29:13.606110 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:29:13 crc kubenswrapper[4983]: I1125 20:29:13.606864 4983 scope.go:117] "RemoveContainer" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" Nov 25 20:29:14 crc kubenswrapper[4983]: I1125 20:29:14.576195 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-59l9r"] Nov 25 20:29:14 crc kubenswrapper[4983]: I1125 20:29:14.576315 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:14 crc kubenswrapper[4983]: E1125 20:29:14.576400 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:29:14 crc kubenswrapper[4983]: I1125 20:29:14.637976 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/3.log" Nov 25 20:29:14 crc kubenswrapper[4983]: I1125 20:29:14.640887 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerStarted","Data":"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771"} Nov 25 20:29:14 crc kubenswrapper[4983]: I1125 20:29:14.641409 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:29:14 crc kubenswrapper[4983]: I1125 20:29:14.668391 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podStartSLOduration=114.66836961 podStartE2EDuration="1m54.66836961s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:14.667722093 +0000 UTC m=+135.780255495" watchObservedRunningTime="2025-11-25 20:29:14.66836961 +0000 UTC m=+135.780903012" Nov 25 20:29:14 crc kubenswrapper[4983]: E1125 20:29:14.702119 4983 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 20:29:15 crc kubenswrapper[4983]: I1125 20:29:15.604659 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:15 crc kubenswrapper[4983]: I1125 20:29:15.604757 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:15 crc kubenswrapper[4983]: I1125 20:29:15.604677 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:15 crc kubenswrapper[4983]: E1125 20:29:15.604986 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:29:15 crc kubenswrapper[4983]: E1125 20:29:15.605138 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:29:15 crc kubenswrapper[4983]: E1125 20:29:15.605310 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:29:16 crc kubenswrapper[4983]: I1125 20:29:16.604989 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:16 crc kubenswrapper[4983]: E1125 20:29:16.605277 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:29:17 crc kubenswrapper[4983]: I1125 20:29:17.604795 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:17 crc kubenswrapper[4983]: I1125 20:29:17.604950 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:17 crc kubenswrapper[4983]: E1125 20:29:17.605017 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:29:17 crc kubenswrapper[4983]: I1125 20:29:17.605184 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:17 crc kubenswrapper[4983]: E1125 20:29:17.605671 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:29:17 crc kubenswrapper[4983]: E1125 20:29:17.605840 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:29:18 crc kubenswrapper[4983]: I1125 20:29:18.604100 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:18 crc kubenswrapper[4983]: E1125 20:29:18.604361 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-59l9r" podUID="badc9ffd-b860-4ebb-a59f-044def6963d4" Nov 25 20:29:19 crc kubenswrapper[4983]: I1125 20:29:19.604821 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:19 crc kubenswrapper[4983]: I1125 20:29:19.604985 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:19 crc kubenswrapper[4983]: E1125 20:29:19.606429 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 20:29:19 crc kubenswrapper[4983]: I1125 20:29:19.606491 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:19 crc kubenswrapper[4983]: E1125 20:29:19.606696 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 20:29:19 crc kubenswrapper[4983]: E1125 20:29:19.607027 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 20:29:20 crc kubenswrapper[4983]: I1125 20:29:20.604001 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:20 crc kubenswrapper[4983]: I1125 20:29:20.606786 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 20:29:20 crc kubenswrapper[4983]: I1125 20:29:20.608067 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 20:29:21 crc kubenswrapper[4983]: I1125 20:29:21.332280 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:29:21 crc kubenswrapper[4983]: I1125 20:29:21.603935 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:21 crc kubenswrapper[4983]: I1125 20:29:21.604000 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:21 crc kubenswrapper[4983]: I1125 20:29:21.604137 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:21 crc kubenswrapper[4983]: I1125 20:29:21.606837 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 20:29:21 crc kubenswrapper[4983]: I1125 20:29:21.607009 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 20:29:21 crc kubenswrapper[4983]: I1125 20:29:21.607026 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 20:29:21 crc kubenswrapper[4983]: I1125 20:29:21.606843 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 20:29:27 crc kubenswrapper[4983]: I1125 20:29:27.490225 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:27 crc kubenswrapper[4983]: E1125 20:29:27.490659 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:31:29.490526914 +0000 UTC m=+270.603060356 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:27 crc kubenswrapper[4983]: I1125 20:29:27.591730 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:27 crc kubenswrapper[4983]: I1125 20:29:27.591843 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:27 crc kubenswrapper[4983]: I1125 20:29:27.591904 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:27 crc kubenswrapper[4983]: I1125 20:29:27.591995 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:27 crc kubenswrapper[4983]: I1125 20:29:27.593244 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:27 crc kubenswrapper[4983]: I1125 20:29:27.604365 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:27 crc kubenswrapper[4983]: I1125 20:29:27.604772 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:27 crc kubenswrapper[4983]: I1125 20:29:27.612326 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:27 crc 
kubenswrapper[4983]: I1125 20:29:27.638320 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 20:29:27 crc kubenswrapper[4983]: I1125 20:29:27.659058 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 20:29:27 crc kubenswrapper[4983]: I1125 20:29:27.675968 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.700330 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7b0d804c42efdfa333f7d3e4a437d96edf30341bf063394163cc07c8d0e8ae4a"} Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.701621 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5ad959b60c6012d8bf7044c22b2ae34e66e87c04a0464691bc4bbf7093f58d8f"} Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.704062 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"71ec30ed1ae3ffd35d0001a1731dc38b33570e4ac8b64ff0074ad98c56f2df55"} Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.704247 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"14941e60df53b667b8f7b53a9dba402c48d972005774734b2a6e42057c76eb5e"} Nov 25 
20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.705311 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"91bc06e4d9fbcfb69507dbf0b55d8649969b8bd6d7d8589688ceef27a68b5d47"} Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.705385 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"85d29fe0526d9d787a2f7ed8527075c98fdd6ffa550e792c71e89c86f0c14582"} Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.705569 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.824647 4983 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.868925 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6xxlr"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.869436 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6x4tb"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.869693 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.869876 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.870030 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.871052 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.909070 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51523753-e43c-4a7d-a3a2-412e6ef40670-etcd-client\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.909120 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51523753-e43c-4a7d-a3a2-412e6ef40670-audit-policies\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.914094 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.915032 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.915091 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.915208 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.916370 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 
20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.917616 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.917740 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.917789 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.918160 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.918276 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.924301 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.924792 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.925415 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.928860 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.929170 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.933448 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.938956 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.938973 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.939019 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.939643 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.939819 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.939840 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.940340 4983 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9zs6k"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.940730 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.942074 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.942080 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.942966 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.947341 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.947348 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.947672 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.947686 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.947707 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.947684 4983 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.948187 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.950482 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lbln6"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.956793 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.957292 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.957424 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.957785 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.958156 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.968357 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.968595 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.968624 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.968654 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.968595 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.969192 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.969300 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.969542 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.969646 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.970549 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk"] Nov 25 20:29:28 
crc kubenswrapper[4983]: I1125 20:29:28.971189 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.971452 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-c94zn"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.971949 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.972278 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.972388 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.972476 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.972535 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.973780 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.973906 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-g8bfq"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.974019 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.974179 4983 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.974697 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.975114 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.975480 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8zwnb"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.975617 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.976025 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.978761 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-b2krm"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.979188 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-b2krm" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.979835 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.980224 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.981797 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ztngk"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.982301 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.983077 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67"] Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.983478 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.993255 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.993502 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.993681 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.994012 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.994127 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.994236 4983 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.994406 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.994519 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.994665 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.994775 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.994889 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.994997 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.995100 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.995207 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.995433 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.995537 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 20:29:28 crc 
kubenswrapper[4983]: I1125 20:29:28.995662 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.995824 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.996005 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.996115 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.997024 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.997159 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.997275 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.997392 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.997540 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.998272 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.998440 4983 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.998849 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.999140 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.999323 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.999626 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.999798 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 20:29:28 crc kubenswrapper[4983]: I1125 20:29:28.999932 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.000112 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.000228 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.000369 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.009176 4983 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.014860 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.016509 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.017278 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.018037 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.018498 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020194 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwvqp\" (UniqueName: \"kubernetes.io/projected/e842492e-468d-46a1-b4ae-2098daf5e263-kube-api-access-wwvqp\") pod \"downloads-7954f5f757-b2krm\" (UID: \"e842492e-468d-46a1-b4ae-2098daf5e263\") " pod="openshift-console/downloads-7954f5f757-b2krm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020246 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 
20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020277 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-client-ca\") pod \"route-controller-manager-6576b87f9c-l7zvq\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020302 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020326 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-oauth-serving-cert\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020347 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e36d951-df6e-4b44-a4b5-4aaa3daefe75-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-c8p67\" (UID: \"1e36d951-df6e-4b44-a4b5-4aaa3daefe75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020370 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/567037cb-2605-4e85-9d56-909dab2a8d1d-config\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020439 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-config\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020462 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/498e7b96-97c7-43f6-ba6d-cbc03ea9e543-serving-cert\") pod \"console-operator-58897d9998-8zwnb\" (UID: \"498e7b96-97c7-43f6-ba6d-cbc03ea9e543\") " pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020482 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d11029b9-9bae-4f73-a448-ae8996511256-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xpk5j\" (UID: \"d11029b9-9bae-4f73-a448-ae8996511256\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020499 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32b0116d-fe96-4215-a627-49ef66a62147-audit-dir\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc 
kubenswrapper[4983]: I1125 20:29:29.020630 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9gtd\" (UniqueName: \"kubernetes.io/projected/a8b5262c-2b08-404a-a884-d5294dcc82be-kube-api-access-w9gtd\") pod \"openshift-apiserver-operator-796bbdcf4f-rkw88\" (UID: \"a8b5262c-2b08-404a-a884-d5294dcc82be\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020663 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020694 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnwxz\" (UniqueName: \"kubernetes.io/projected/1e36d951-df6e-4b44-a4b5-4aaa3daefe75-kube-api-access-pnwxz\") pod \"machine-config-controller-84d6567774-c8p67\" (UID: \"1e36d951-df6e-4b44-a4b5-4aaa3daefe75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020720 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020747 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020778 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/51523753-e43c-4a7d-a3a2-412e6ef40670-encryption-config\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020814 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/32b0116d-fe96-4215-a627-49ef66a62147-encryption-config\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020844 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567037cb-2605-4e85-9d56-909dab2a8d1d-serving-cert\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020874 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/567037cb-2605-4e85-9d56-909dab2a8d1d-service-ca-bundle\") pod \"authentication-operator-69f744f599-lbln6\" (UID: 
\"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020907 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tvx\" (UniqueName: \"kubernetes.io/projected/d10a20ce-f44b-45b4-b199-759adf792fe0-kube-api-access-g5tvx\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020937 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14c4f68a-8367-414b-87f9-3582c6ec9064-machine-approver-tls\") pod \"machine-approver-56656f9798-zzlhs\" (UID: \"14c4f68a-8367-414b-87f9-3582c6ec9064\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020972 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8b5262c-2b08-404a-a884-d5294dcc82be-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rkw88\" (UID: \"a8b5262c-2b08-404a-a884-d5294dcc82be\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.020995 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcmt9\" (UniqueName: \"kubernetes.io/projected/d11029b9-9bae-4f73-a448-ae8996511256-kube-api-access-qcmt9\") pod \"cluster-image-registry-operator-dc59b4c8b-xpk5j\" (UID: \"d11029b9-9bae-4f73-a448-ae8996511256\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 
20:29:29.021021 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aed03db9-cd2b-4aa5-96d4-de0e00e95842-config\") pod \"machine-api-operator-5694c8668f-ztngk\" (UID: \"aed03db9-cd2b-4aa5-96d4-de0e00e95842\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021048 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs22p\" (UniqueName: \"kubernetes.io/projected/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-kube-api-access-vs22p\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021073 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d11029b9-9bae-4f73-a448-ae8996511256-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xpk5j\" (UID: \"d11029b9-9bae-4f73-a448-ae8996511256\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021089 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-serving-cert\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021110 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021130 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/567037cb-2605-4e85-9d56-909dab2a8d1d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021148 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d10a20ce-f44b-45b4-b199-759adf792fe0-audit-dir\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021170 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n85v\" (UniqueName: \"kubernetes.io/projected/deeb38fe-7024-47af-94be-9099f96d6cc9-kube-api-access-8n85v\") pod \"openshift-controller-manager-operator-756b6f6bc6-2ttdh\" (UID: \"deeb38fe-7024-47af-94be-9099f96d6cc9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021195 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51523753-e43c-4a7d-a3a2-412e6ef40670-etcd-client\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021218 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51523753-e43c-4a7d-a3a2-412e6ef40670-serving-cert\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021239 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021260 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/32b0116d-fe96-4215-a627-49ef66a62147-etcd-client\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021278 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51523753-e43c-4a7d-a3a2-412e6ef40670-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021296 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/32b0116d-fe96-4215-a627-49ef66a62147-node-pullsecrets\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021316 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-serving-cert\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021335 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021354 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-serving-cert\") pod \"route-controller-manager-6576b87f9c-l7zvq\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021375 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51523753-e43c-4a7d-a3a2-412e6ef40670-audit-policies\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 
20:29:29.021396 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51523753-e43c-4a7d-a3a2-412e6ef40670-audit-dir\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021416 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/aed03db9-cd2b-4aa5-96d4-de0e00e95842-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ztngk\" (UID: \"aed03db9-cd2b-4aa5-96d4-de0e00e95842\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021441 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lpxk\" (UniqueName: \"kubernetes.io/projected/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-kube-api-access-2lpxk\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021458 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32b0116d-fe96-4215-a627-49ef66a62147-serving-cert\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021479 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-client-ca\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: 
\"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021499 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/498e7b96-97c7-43f6-ba6d-cbc03ea9e543-config\") pod \"console-operator-58897d9998-8zwnb\" (UID: \"498e7b96-97c7-43f6-ba6d-cbc03ea9e543\") " pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021522 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-config\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021539 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-trusted-ca-bundle\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021577 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e36d951-df6e-4b44-a4b5-4aaa3daefe75-proxy-tls\") pod \"machine-config-controller-84d6567774-c8p67\" (UID: \"1e36d951-df6e-4b44-a4b5-4aaa3daefe75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021615 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6l9p\" (UniqueName: 
\"kubernetes.io/projected/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-kube-api-access-f6l9p\") pod \"route-controller-manager-6576b87f9c-l7zvq\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021634 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d11029b9-9bae-4f73-a448-ae8996511256-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xpk5j\" (UID: \"d11029b9-9bae-4f73-a448-ae8996511256\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021673 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021693 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021714 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8q6t\" (UniqueName: \"kubernetes.io/projected/2a38d967-78e8-45a1-9093-d24e38d84da7-kube-api-access-g8q6t\") pod 
\"openshift-config-operator-7777fb866f-c94zn\" (UID: \"2a38d967-78e8-45a1-9093-d24e38d84da7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021735 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-config\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021754 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aed03db9-cd2b-4aa5-96d4-de0e00e95842-images\") pod \"machine-api-operator-5694c8668f-ztngk\" (UID: \"aed03db9-cd2b-4aa5-96d4-de0e00e95842\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021773 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftgpc\" (UniqueName: \"kubernetes.io/projected/567037cb-2605-4e85-9d56-909dab2a8d1d-kube-api-access-ftgpc\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021793 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m646\" (UniqueName: \"kubernetes.io/projected/1fff092b-fe51-487a-a6b5-af6ad677ec38-kube-api-access-4m646\") pod \"cluster-samples-operator-665b6dd947-r26wk\" (UID: \"1fff092b-fe51-487a-a6b5-af6ad677ec38\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 
20:29:29.021910 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deeb38fe-7024-47af-94be-9099f96d6cc9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2ttdh\" (UID: \"deeb38fe-7024-47af-94be-9099f96d6cc9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021929 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8b5262c-2b08-404a-a884-d5294dcc82be-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rkw88\" (UID: \"a8b5262c-2b08-404a-a884-d5294dcc82be\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021955 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.021986 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-config\") pod \"route-controller-manager-6576b87f9c-l7zvq\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.022094 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q45km\" (UniqueName: 
\"kubernetes.io/projected/aed03db9-cd2b-4aa5-96d4-de0e00e95842-kube-api-access-q45km\") pod \"machine-api-operator-5694c8668f-ztngk\" (UID: \"aed03db9-cd2b-4aa5-96d4-de0e00e95842\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.022132 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r8wj\" (UniqueName: \"kubernetes.io/projected/51523753-e43c-4a7d-a3a2-412e6ef40670-kube-api-access-9r8wj\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.022627 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.022974 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.023895 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.041221 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.041458 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.041582 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.041668 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 
20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.041740 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.042258 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.042392 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxlgh\" (UniqueName: \"kubernetes.io/projected/32b0116d-fe96-4215-a627-49ef66a62147-kube-api-access-bxlgh\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.042486 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.042730 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.044631 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wcs4w"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.045311 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.045706 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.045783 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 
20:29:29.045794 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.045950 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.046247 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.046477 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.046660 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wcs4w" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.046899 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5q7\" (UniqueName: \"kubernetes.io/projected/498e7b96-97c7-43f6-ba6d-cbc03ea9e543-kube-api-access-bd5q7\") pod \"console-operator-58897d9998-8zwnb\" (UID: \"498e7b96-97c7-43f6-ba6d-cbc03ea9e543\") " pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.046931 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c4f68a-8367-414b-87f9-3582c6ec9064-config\") pod \"machine-approver-56656f9798-zzlhs\" (UID: \"14c4f68a-8367-414b-87f9-3582c6ec9064\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.047581 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/51523753-e43c-4a7d-a3a2-412e6ef40670-audit-policies\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.047680 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-audit\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.047729 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-image-import-ca\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.047762 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/51523753-e43c-4a7d-a3a2-412e6ef40670-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.047797 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 
20:29:29.047819 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deeb38fe-7024-47af-94be-9099f96d6cc9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2ttdh\" (UID: \"deeb38fe-7024-47af-94be-9099f96d6cc9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.047845 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-etcd-serving-ca\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.047870 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a38d967-78e8-45a1-9093-d24e38d84da7-serving-cert\") pod \"openshift-config-operator-7777fb866f-c94zn\" (UID: \"2a38d967-78e8-45a1-9093-d24e38d84da7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.047888 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14c4f68a-8367-414b-87f9-3582c6ec9064-auth-proxy-config\") pod \"machine-approver-56656f9798-zzlhs\" (UID: \"14c4f68a-8367-414b-87f9-3582c6ec9064\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.047924 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/498e7b96-97c7-43f6-ba6d-cbc03ea9e543-trusted-ca\") pod \"console-operator-58897d9998-8zwnb\" (UID: \"498e7b96-97c7-43f6-ba6d-cbc03ea9e543\") " pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.047946 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.047962 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-oauth-config\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.047987 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjm75\" (UniqueName: \"kubernetes.io/projected/14c4f68a-8367-414b-87f9-3582c6ec9064-kube-api-access-zjm75\") pod \"machine-approver-56656f9798-zzlhs\" (UID: \"14c4f68a-8367-414b-87f9-3582c6ec9064\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.048008 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fff092b-fe51-487a-a6b5-af6ad677ec38-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r26wk\" (UID: \"1fff092b-fe51-487a-a6b5-af6ad677ec38\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.048030 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2a38d967-78e8-45a1-9093-d24e38d84da7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-c94zn\" (UID: \"2a38d967-78e8-45a1-9093-d24e38d84da7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.048053 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-service-ca\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.048071 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-audit-policies\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.048287 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.048582 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.051090 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vp8dm"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.052152 4983 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.052773 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-zql6p"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.053468 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.053957 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vp8dm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.054178 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.054384 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.054607 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.055841 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.055873 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.056152 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.057131 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51523753-e43c-4a7d-a3a2-412e6ef40670-etcd-client\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.059945 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.060354 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.060405 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.060501 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.062881 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.063417 4983 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-4sz55"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.063982 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.065219 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.066926 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4sz55" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.067858 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.068427 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sw79t"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.068848 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.069754 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.069991 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.070472 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6xxlr"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.070501 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gznhv"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.070858 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.071009 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.071081 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.071262 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.076868 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.083001 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-svs28"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.090254 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cvg4v"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.091035 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.092350 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.092734 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.092762 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.094841 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.096647 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.103649 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vd426"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.104191 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.104612 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vd426" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.108042 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rb7rw"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.113495 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.116503 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-srjk5"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.117142 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.121017 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8zwnb"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.121057 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6x4tb"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.121277 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-srjk5" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.121444 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lbln6"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.122723 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9zs6k"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.124378 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.125622 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.127143 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g8bfq"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.128679 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.129736 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.130780 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4sz55"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.132772 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.132788 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.134533 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.135925 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ztngk"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.140242 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.142159 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-c94zn"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.144753 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.147381 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149045 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bdb6c5b-8666-459e-83c6-a783159a20cb-config\") pod \"kube-controller-manager-operator-78b949d7b-b4hzt\" (UID: \"3bdb6c5b-8666-459e-83c6-a783159a20cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149102 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/51523753-e43c-4a7d-a3a2-412e6ef40670-encryption-config\") pod 
\"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149129 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/32b0116d-fe96-4215-a627-49ef66a62147-encryption-config\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149152 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149214 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv48h\" (UniqueName: \"kubernetes.io/projected/ec800216-1c1b-4324-a1be-2a0c5dcc6ce5-kube-api-access-mv48h\") pod \"control-plane-machine-set-operator-78cbb6b69f-d2mch\" (UID: \"ec800216-1c1b-4324-a1be-2a0c5dcc6ce5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149243 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/567037cb-2605-4e85-9d56-909dab2a8d1d-service-ca-bundle\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149331 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g5tvx\" (UniqueName: \"kubernetes.io/projected/d10a20ce-f44b-45b4-b199-759adf792fe0-kube-api-access-g5tvx\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149154 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vp8dm"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149354 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8b5262c-2b08-404a-a884-d5294dcc82be-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rkw88\" (UID: \"a8b5262c-2b08-404a-a884-d5294dcc82be\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149445 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567037cb-2605-4e85-9d56-909dab2a8d1d-serving-cert\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149473 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d11029b9-9bae-4f73-a448-ae8996511256-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xpk5j\" (UID: \"d11029b9-9bae-4f73-a448-ae8996511256\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149500 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cvg4v\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149524 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec554c34-7720-4354-8639-b7b70a2f8894-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-59vxf\" (UID: \"ec554c34-7720-4354-8639-b7b70a2f8894\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149548 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de5f7b4-7675-4e35-a273-8a68c4d127e1-config\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149590 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149613 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-serving-cert\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " 
pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149636 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d10a20ce-f44b-45b4-b199-759adf792fe0-audit-dir\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149663 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n85v\" (UniqueName: \"kubernetes.io/projected/deeb38fe-7024-47af-94be-9099f96d6cc9-kube-api-access-8n85v\") pod \"openshift-controller-manager-operator-756b6f6bc6-2ttdh\" (UID: \"deeb38fe-7024-47af-94be-9099f96d6cc9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149685 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51523753-e43c-4a7d-a3a2-412e6ef40670-serving-cert\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149709 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntt5j\" (UniqueName: \"kubernetes.io/projected/e07434b8-dec6-40ac-b297-d1dcff926553-kube-api-access-ntt5j\") pod \"ingress-operator-5b745b69d9-8fccn\" (UID: \"e07434b8-dec6-40ac-b297-d1dcff926553\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149731 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/32b0116d-fe96-4215-a627-49ef66a62147-etcd-client\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149750 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-serving-cert\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149769 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51523753-e43c-4a7d-a3a2-412e6ef40670-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149789 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/aed03db9-cd2b-4aa5-96d4-de0e00e95842-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ztngk\" (UID: \"aed03db9-cd2b-4aa5-96d4-de0e00e95842\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149811 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bdb6c5b-8666-459e-83c6-a783159a20cb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b4hzt\" (UID: \"3bdb6c5b-8666-459e-83c6-a783159a20cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 
20:29:29.149828 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec554c34-7720-4354-8639-b7b70a2f8894-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-59vxf\" (UID: \"ec554c34-7720-4354-8639-b7b70a2f8894\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149845 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32b0116d-fe96-4215-a627-49ef66a62147-serving-cert\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149863 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-config\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149881 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9de5f7b4-7675-4e35-a273-8a68c4d127e1-serving-cert\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149902 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d11029b9-9bae-4f73-a448-ae8996511256-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xpk5j\" (UID: \"d11029b9-9bae-4f73-a448-ae8996511256\") 
" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149925 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149970 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8q6t\" (UniqueName: \"kubernetes.io/projected/2a38d967-78e8-45a1-9093-d24e38d84da7-kube-api-access-g8q6t\") pod \"openshift-config-operator-7777fb866f-c94zn\" (UID: \"2a38d967-78e8-45a1-9093-d24e38d84da7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.149987 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aed03db9-cd2b-4aa5-96d4-de0e00e95842-images\") pod \"machine-api-operator-5694c8668f-ztngk\" (UID: \"aed03db9-cd2b-4aa5-96d4-de0e00e95842\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150009 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmdx9\" (UniqueName: \"kubernetes.io/projected/6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e-kube-api-access-qmdx9\") pod \"kube-storage-version-migrator-operator-b67b599dd-l2gbx\" (UID: \"6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150034 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ftgpc\" (UniqueName: \"kubernetes.io/projected/567037cb-2605-4e85-9d56-909dab2a8d1d-kube-api-access-ftgpc\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150053 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m646\" (UniqueName: \"kubernetes.io/projected/1fff092b-fe51-487a-a6b5-af6ad677ec38-kube-api-access-4m646\") pod \"cluster-samples-operator-665b6dd947-r26wk\" (UID: \"1fff092b-fe51-487a-a6b5-af6ad677ec38\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150072 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deeb38fe-7024-47af-94be-9099f96d6cc9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2ttdh\" (UID: \"deeb38fe-7024-47af-94be-9099f96d6cc9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150091 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-config\") pod \"route-controller-manager-6576b87f9c-l7zvq\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150115 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r8wj\" (UniqueName: \"kubernetes.io/projected/51523753-e43c-4a7d-a3a2-412e6ef40670-kube-api-access-9r8wj\") pod 
\"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150137 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c4f68a-8367-414b-87f9-3582c6ec9064-config\") pod \"machine-approver-56656f9798-zzlhs\" (UID: \"14c4f68a-8367-414b-87f9-3582c6ec9064\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150153 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxlgh\" (UniqueName: \"kubernetes.io/projected/32b0116d-fe96-4215-a627-49ef66a62147-kube-api-access-bxlgh\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150170 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/51523753-e43c-4a7d-a3a2-412e6ef40670-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150190 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150207 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cvg4v\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150225 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e07434b8-dec6-40ac-b297-d1dcff926553-metrics-tls\") pod \"ingress-operator-5b745b69d9-8fccn\" (UID: \"e07434b8-dec6-40ac-b297-d1dcff926553\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150246 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-oauth-config\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150262 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/61a26a3c-422b-4596-821e-fb0d287ce966-profile-collector-cert\") pod \"catalog-operator-68c6474976-qrplb\" (UID: \"61a26a3c-422b-4596-821e-fb0d287ce966\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150281 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fff092b-fe51-487a-a6b5-af6ad677ec38-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r26wk\" (UID: \"1fff092b-fe51-487a-a6b5-af6ad677ec38\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk" Nov 25 
20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150296 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-service-ca\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150316 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjm75\" (UniqueName: \"kubernetes.io/projected/14c4f68a-8367-414b-87f9-3582c6ec9064-kube-api-access-zjm75\") pod \"machine-approver-56656f9798-zzlhs\" (UID: \"14c4f68a-8367-414b-87f9-3582c6ec9064\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150339 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-audit-policies\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150356 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150371 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-oauth-serving-cert\") pod \"console-f9d7485db-g8bfq\" (UID: 
\"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150393 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e36d951-df6e-4b44-a4b5-4aaa3daefe75-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-c8p67\" (UID: \"1e36d951-df6e-4b44-a4b5-4aaa3daefe75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150411 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567037cb-2605-4e85-9d56-909dab2a8d1d-config\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150427 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/498e7b96-97c7-43f6-ba6d-cbc03ea9e543-serving-cert\") pod \"console-operator-58897d9998-8zwnb\" (UID: \"498e7b96-97c7-43f6-ba6d-cbc03ea9e543\") " pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150447 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d11029b9-9bae-4f73-a448-ae8996511256-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xpk5j\" (UID: \"d11029b9-9bae-4f73-a448-ae8996511256\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150466 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/32b0116d-fe96-4215-a627-49ef66a62147-audit-dir\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150484 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e07434b8-dec6-40ac-b297-d1dcff926553-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8fccn\" (UID: \"e07434b8-dec6-40ac-b297-d1dcff926553\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150503 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9gtd\" (UniqueName: \"kubernetes.io/projected/a8b5262c-2b08-404a-a884-d5294dcc82be-kube-api-access-w9gtd\") pod \"openshift-apiserver-operator-796bbdcf4f-rkw88\" (UID: \"a8b5262c-2b08-404a-a884-d5294dcc82be\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150521 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150539 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e07434b8-dec6-40ac-b297-d1dcff926553-trusted-ca\") pod \"ingress-operator-5b745b69d9-8fccn\" (UID: \"e07434b8-dec6-40ac-b297-d1dcff926553\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" Nov 25 
20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150585 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnwxz\" (UniqueName: \"kubernetes.io/projected/1e36d951-df6e-4b44-a4b5-4aaa3daefe75-kube-api-access-pnwxz\") pod \"machine-config-controller-84d6567774-c8p67\" (UID: \"1e36d951-df6e-4b44-a4b5-4aaa3daefe75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150606 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150628 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85269fe1-85ed-4c9d-9864-068497a85668-cert\") pod \"ingress-canary-srjk5\" (UID: \"85269fe1-85ed-4c9d-9864-068497a85668\") " pod="openshift-ingress-canary/ingress-canary-srjk5" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150647 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14c4f68a-8367-414b-87f9-3582c6ec9064-machine-approver-tls\") pod \"machine-approver-56656f9798-zzlhs\" (UID: \"14c4f68a-8367-414b-87f9-3582c6ec9064\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150674 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcmt9\" (UniqueName: \"kubernetes.io/projected/d11029b9-9bae-4f73-a448-ae8996511256-kube-api-access-qcmt9\") pod 
\"cluster-image-registry-operator-dc59b4c8b-xpk5j\" (UID: \"d11029b9-9bae-4f73-a448-ae8996511256\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150690 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aed03db9-cd2b-4aa5-96d4-de0e00e95842-config\") pod \"machine-api-operator-5694c8668f-ztngk\" (UID: \"aed03db9-cd2b-4aa5-96d4-de0e00e95842\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150706 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7nmw\" (UniqueName: \"kubernetes.io/projected/85269fe1-85ed-4c9d-9864-068497a85668-kube-api-access-n7nmw\") pod \"ingress-canary-srjk5\" (UID: \"85269fe1-85ed-4c9d-9864-068497a85668\") " pod="openshift-ingress-canary/ingress-canary-srjk5" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150726 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs22p\" (UniqueName: \"kubernetes.io/projected/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-kube-api-access-vs22p\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150743 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9t2s\" (UniqueName: \"kubernetes.io/projected/61a26a3c-422b-4596-821e-fb0d287ce966-kube-api-access-p9t2s\") pod \"catalog-operator-68c6474976-qrplb\" (UID: \"61a26a3c-422b-4596-821e-fb0d287ce966\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150761 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkcj7\" (UniqueName: \"kubernetes.io/projected/69a54a85-daaa-41f3-8590-99dc51245879-kube-api-access-lkcj7\") pod \"package-server-manager-789f6589d5-fd4hc\" (UID: \"69a54a85-daaa-41f3-8590-99dc51245879\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150781 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/567037cb-2605-4e85-9d56-909dab2a8d1d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150796 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec554c34-7720-4354-8639-b7b70a2f8894-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-59vxf\" (UID: \"ec554c34-7720-4354-8639-b7b70a2f8894\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150815 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-l2gbx\" (UID: \"6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150834 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150866 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/32b0116d-fe96-4215-a627-49ef66a62147-node-pullsecrets\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150884 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150905 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-serving-cert\") pod \"route-controller-manager-6576b87f9c-l7zvq\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150923 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51523753-e43c-4a7d-a3a2-412e6ef40670-audit-dir\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150947 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec800216-1c1b-4324-a1be-2a0c5dcc6ce5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d2mch\" (UID: \"ec800216-1c1b-4324-a1be-2a0c5dcc6ce5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150966 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9de5f7b4-7675-4e35-a273-8a68c4d127e1-etcd-service-ca\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.150985 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9de5f7b4-7675-4e35-a273-8a68c4d127e1-etcd-ca\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151003 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbspl\" (UniqueName: \"kubernetes.io/projected/9de5f7b4-7675-4e35-a273-8a68c4d127e1-kube-api-access-vbspl\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151021 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-client-ca\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: 
\"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151038 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/498e7b96-97c7-43f6-ba6d-cbc03ea9e543-config\") pod \"console-operator-58897d9998-8zwnb\" (UID: \"498e7b96-97c7-43f6-ba6d-cbc03ea9e543\") " pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151054 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lpxk\" (UniqueName: \"kubernetes.io/projected/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-kube-api-access-2lpxk\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151070 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-trusted-ca-bundle\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151087 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e36d951-df6e-4b44-a4b5-4aaa3daefe75-proxy-tls\") pod \"machine-config-controller-84d6567774-c8p67\" (UID: \"1e36d951-df6e-4b44-a4b5-4aaa3daefe75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151119 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6l9p\" (UniqueName: 
\"kubernetes.io/projected/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-kube-api-access-f6l9p\") pod \"route-controller-manager-6576b87f9c-l7zvq\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151139 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151159 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-config\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151177 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151195 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q45km\" (UniqueName: \"kubernetes.io/projected/aed03db9-cd2b-4aa5-96d4-de0e00e95842-kube-api-access-q45km\") pod \"machine-api-operator-5694c8668f-ztngk\" (UID: \"aed03db9-cd2b-4aa5-96d4-de0e00e95842\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" Nov 25 20:29:29 crc 
kubenswrapper[4983]: I1125 20:29:29.151215 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8b5262c-2b08-404a-a884-d5294dcc82be-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rkw88\" (UID: \"a8b5262c-2b08-404a-a884-d5294dcc82be\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151234 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlfp7\" (UniqueName: \"kubernetes.io/projected/50f76fe0-cc37-4a22-bb1a-7df5d6012224-kube-api-access-xlfp7\") pod \"marketplace-operator-79b997595-cvg4v\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151253 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-l2gbx\" (UID: \"6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151273 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd5q7\" (UniqueName: \"kubernetes.io/projected/498e7b96-97c7-43f6-ba6d-cbc03ea9e543-kube-api-access-bd5q7\") pod \"console-operator-58897d9998-8zwnb\" (UID: \"498e7b96-97c7-43f6-ba6d-cbc03ea9e543\") " pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151295 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-audit\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151313 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-image-import-ca\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151331 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deeb38fe-7024-47af-94be-9099f96d6cc9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2ttdh\" (UID: \"deeb38fe-7024-47af-94be-9099f96d6cc9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151351 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bdb6c5b-8666-459e-83c6-a783159a20cb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b4hzt\" (UID: \"3bdb6c5b-8666-459e-83c6-a783159a20cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151371 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-etcd-serving-ca\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151393 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a38d967-78e8-45a1-9093-d24e38d84da7-serving-cert\") pod \"openshift-config-operator-7777fb866f-c94zn\" (UID: \"2a38d967-78e8-45a1-9093-d24e38d84da7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151413 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14c4f68a-8367-414b-87f9-3582c6ec9064-auth-proxy-config\") pod \"machine-approver-56656f9798-zzlhs\" (UID: \"14c4f68a-8367-414b-87f9-3582c6ec9064\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151431 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/498e7b96-97c7-43f6-ba6d-cbc03ea9e543-trusted-ca\") pod \"console-operator-58897d9998-8zwnb\" (UID: \"498e7b96-97c7-43f6-ba6d-cbc03ea9e543\") " pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151448 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151465 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2a38d967-78e8-45a1-9093-d24e38d84da7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-c94zn\" (UID: 
\"2a38d967-78e8-45a1-9093-d24e38d84da7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151485 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-client-ca\") pod \"route-controller-manager-6576b87f9c-l7zvq\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151502 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwvqp\" (UniqueName: \"kubernetes.io/projected/e842492e-468d-46a1-b4ae-2098daf5e263-kube-api-access-wwvqp\") pod \"downloads-7954f5f757-b2krm\" (UID: \"e842492e-468d-46a1-b4ae-2098daf5e263\") " pod="openshift-console/downloads-7954f5f757-b2krm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151520 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151538 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9de5f7b4-7675-4e35-a273-8a68c4d127e1-etcd-client\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151566 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/61a26a3c-422b-4596-821e-fb0d287ce966-srv-cert\") pod \"catalog-operator-68c6474976-qrplb\" (UID: \"61a26a3c-422b-4596-821e-fb0d287ce966\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151584 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-config\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151602 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/69a54a85-daaa-41f3-8590-99dc51245879-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fd4hc\" (UID: \"69a54a85-daaa-41f3-8590-99dc51245879\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.151735 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/51523753-e43c-4a7d-a3a2-412e6ef40670-encryption-config\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.152361 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/567037cb-2605-4e85-9d56-909dab2a8d1d-service-ca-bundle\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc 
kubenswrapper[4983]: I1125 20:29:29.152395 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.152982 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.153134 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-service-ca\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.153350 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8b5262c-2b08-404a-a884-d5294dcc82be-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rkw88\" (UID: \"a8b5262c-2b08-404a-a884-d5294dcc82be\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.153464 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-client-ca\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.153704 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.154049 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/567037cb-2605-4e85-9d56-909dab2a8d1d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.154919 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aed03db9-cd2b-4aa5-96d4-de0e00e95842-config\") pod \"machine-api-operator-5694c8668f-ztngk\" (UID: \"aed03db9-cd2b-4aa5-96d4-de0e00e95842\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.155022 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.155181 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/32b0116d-fe96-4215-a627-49ef66a62147-node-pullsecrets\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.155268 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/498e7b96-97c7-43f6-ba6d-cbc03ea9e543-config\") pod \"console-operator-58897d9998-8zwnb\" (UID: \"498e7b96-97c7-43f6-ba6d-cbc03ea9e543\") " pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.155375 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.155641 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14c4f68a-8367-414b-87f9-3582c6ec9064-machine-approver-tls\") pod \"machine-approver-56656f9798-zzlhs\" (UID: \"14c4f68a-8367-414b-87f9-3582c6ec9064\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.156004 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.156310 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-audit\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.156755 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-config\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.156817 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-oauth-config\") pod \"console-f9d7485db-g8bfq\" (UID: 
\"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.156831 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8b5262c-2b08-404a-a884-d5294dcc82be-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rkw88\" (UID: \"a8b5262c-2b08-404a-a884-d5294dcc82be\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.157129 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-image-import-ca\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.157151 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-etcd-serving-ca\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.157696 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.158043 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-trusted-ca-bundle\") 
pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.158435 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-config\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.158493 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d11029b9-9bae-4f73-a448-ae8996511256-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xpk5j\" (UID: \"d11029b9-9bae-4f73-a448-ae8996511256\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.158931 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/498e7b96-97c7-43f6-ba6d-cbc03ea9e543-trusted-ca\") pod \"console-operator-58897d9998-8zwnb\" (UID: \"498e7b96-97c7-43f6-ba6d-cbc03ea9e543\") " pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.158974 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51523753-e43c-4a7d-a3a2-412e6ef40670-audit-dir\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.159028 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/32b0116d-fe96-4215-a627-49ef66a62147-audit-dir\") pod 
\"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.159170 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-audit-policies\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.160869 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-trusted-ca-bundle\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.160879 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-oauth-serving-cert\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.161790 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deeb38fe-7024-47af-94be-9099f96d6cc9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2ttdh\" (UID: \"deeb38fe-7024-47af-94be-9099f96d6cc9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.162046 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/32b0116d-fe96-4215-a627-49ef66a62147-encryption-config\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.162260 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d10a20ce-f44b-45b4-b199-759adf792fe0-audit-dir\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.162351 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567037cb-2605-4e85-9d56-909dab2a8d1d-serving-cert\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.162362 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aed03db9-cd2b-4aa5-96d4-de0e00e95842-images\") pod \"machine-api-operator-5694c8668f-ztngk\" (UID: \"aed03db9-cd2b-4aa5-96d4-de0e00e95842\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.162498 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14c4f68a-8367-414b-87f9-3582c6ec9064-auth-proxy-config\") pod \"machine-approver-56656f9798-zzlhs\" (UID: \"14c4f68a-8367-414b-87f9-3582c6ec9064\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.162922 4983 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-server-6fm9n"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.164220 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-config\") pod \"route-controller-manager-6576b87f9c-l7zvq\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.164234 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.164301 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.164583 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2a38d967-78e8-45a1-9093-d24e38d84da7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-c94zn\" (UID: \"2a38d967-78e8-45a1-9093-d24e38d84da7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.164717 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6fm9n" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.164764 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14c4f68a-8367-414b-87f9-3582c6ec9064-config\") pod \"machine-approver-56656f9798-zzlhs\" (UID: \"14c4f68a-8367-414b-87f9-3582c6ec9064\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.164776 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e36d951-df6e-4b44-a4b5-4aaa3daefe75-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-c8p67\" (UID: \"1e36d951-df6e-4b44-a4b5-4aaa3daefe75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.164899 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.165218 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567037cb-2605-4e85-9d56-909dab2a8d1d-config\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.165439 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/51523753-e43c-4a7d-a3a2-412e6ef40670-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.165470 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e36d951-df6e-4b44-a4b5-4aaa3daefe75-proxy-tls\") pod \"machine-config-controller-84d6567774-c8p67\" (UID: \"1e36d951-df6e-4b44-a4b5-4aaa3daefe75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.165853 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-client-ca\") pod \"route-controller-manager-6576b87f9c-l7zvq\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.165947 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-fc95k"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.165991 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deeb38fe-7024-47af-94be-9099f96d6cc9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2ttdh\" (UID: \"deeb38fe-7024-47af-94be-9099f96d6cc9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.166026 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9zs6k\" 
(UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.166157 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/51523753-e43c-4a7d-a3a2-412e6ef40670-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.166363 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32b0116d-fe96-4215-a627-49ef66a62147-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.166967 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fff092b-fe51-487a-a6b5-af6ad677ec38-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r26wk\" (UID: \"1fff092b-fe51-487a-a6b5-af6ad677ec38\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.167179 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wcs4w"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.167234 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-config\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.167259 4983 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.167476 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-srjk5"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.167873 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/aed03db9-cd2b-4aa5-96d4-de0e00e95842-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-ztngk\" (UID: \"aed03db9-cd2b-4aa5-96d4-de0e00e95842\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.168325 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.168417 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/498e7b96-97c7-43f6-ba6d-cbc03ea9e543-serving-cert\") pod \"console-operator-58897d9998-8zwnb\" (UID: \"498e7b96-97c7-43f6-ba6d-cbc03ea9e543\") " pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.168525 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/32b0116d-fe96-4215-a627-49ef66a62147-etcd-client\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 
20:29:29.168694 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-svs28"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.169304 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51523753-e43c-4a7d-a3a2-412e6ef40670-serving-cert\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.169864 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-serving-cert\") pod \"route-controller-manager-6576b87f9c-l7zvq\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.170683 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sw79t"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.170826 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32b0116d-fe96-4215-a627-49ef66a62147-serving-cert\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.171995 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.172188 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d11029b9-9bae-4f73-a448-ae8996511256-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-xpk5j\" (UID: \"d11029b9-9bae-4f73-a448-ae8996511256\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.176718 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.176819 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.176837 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-serving-cert\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.179452 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-serving-cert\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.181525 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.181586 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-b2krm"] Nov 25 20:29:29 crc 
kubenswrapper[4983]: I1125 20:29:29.181961 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.184308 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gznhv"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.184372 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fc95k"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.191770 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cvg4v"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.192306 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.193238 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vd426"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.194537 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.197195 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.197246 4983 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a38d967-78e8-45a1-9093-d24e38d84da7-serving-cert\") pod \"openshift-config-operator-7777fb866f-c94zn\" (UID: \"2a38d967-78e8-45a1-9093-d24e38d84da7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.197792 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.199478 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.200015 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.200594 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rb7rw"] Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.212463 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252407 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/9de5f7b4-7675-4e35-a273-8a68c4d127e1-etcd-service-ca\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252454 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec800216-1c1b-4324-a1be-2a0c5dcc6ce5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d2mch\" (UID: \"ec800216-1c1b-4324-a1be-2a0c5dcc6ce5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252489 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbspl\" (UniqueName: \"kubernetes.io/projected/9de5f7b4-7675-4e35-a273-8a68c4d127e1-kube-api-access-vbspl\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252523 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9de5f7b4-7675-4e35-a273-8a68c4d127e1-etcd-ca\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252601 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlfp7\" (UniqueName: \"kubernetes.io/projected/50f76fe0-cc37-4a22-bb1a-7df5d6012224-kube-api-access-xlfp7\") pod \"marketplace-operator-79b997595-cvg4v\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" Nov 25 20:29:29 crc 
kubenswrapper[4983]: I1125 20:29:29.252631 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-l2gbx\" (UID: \"6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252666 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bdb6c5b-8666-459e-83c6-a783159a20cb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b4hzt\" (UID: \"3bdb6c5b-8666-459e-83c6-a783159a20cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252703 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9de5f7b4-7675-4e35-a273-8a68c4d127e1-etcd-client\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252726 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/61a26a3c-422b-4596-821e-fb0d287ce966-srv-cert\") pod \"catalog-operator-68c6474976-qrplb\" (UID: \"61a26a3c-422b-4596-821e-fb0d287ce966\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252750 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/69a54a85-daaa-41f3-8590-99dc51245879-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fd4hc\" (UID: \"69a54a85-daaa-41f3-8590-99dc51245879\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252777 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bdb6c5b-8666-459e-83c6-a783159a20cb-config\") pod \"kube-controller-manager-operator-78b949d7b-b4hzt\" (UID: \"3bdb6c5b-8666-459e-83c6-a783159a20cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252809 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv48h\" (UniqueName: \"kubernetes.io/projected/ec800216-1c1b-4324-a1be-2a0c5dcc6ce5-kube-api-access-mv48h\") pod \"control-plane-machine-set-operator-78cbb6b69f-d2mch\" (UID: \"ec800216-1c1b-4324-a1be-2a0c5dcc6ce5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252845 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cvg4v\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252873 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec554c34-7720-4354-8639-b7b70a2f8894-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-59vxf\" (UID: \"ec554c34-7720-4354-8639-b7b70a2f8894\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252899 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de5f7b4-7675-4e35-a273-8a68c4d127e1-config\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252941 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntt5j\" (UniqueName: \"kubernetes.io/projected/e07434b8-dec6-40ac-b297-d1dcff926553-kube-api-access-ntt5j\") pod \"ingress-operator-5b745b69d9-8fccn\" (UID: \"e07434b8-dec6-40ac-b297-d1dcff926553\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252974 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bdb6c5b-8666-459e-83c6-a783159a20cb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b4hzt\" (UID: \"3bdb6c5b-8666-459e-83c6-a783159a20cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.252997 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec554c34-7720-4354-8639-b7b70a2f8894-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-59vxf\" (UID: \"ec554c34-7720-4354-8639-b7b70a2f8894\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253025 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9de5f7b4-7675-4e35-a273-8a68c4d127e1-serving-cert\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253063 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmdx9\" (UniqueName: \"kubernetes.io/projected/6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e-kube-api-access-qmdx9\") pod \"kube-storage-version-migrator-operator-b67b599dd-l2gbx\" (UID: \"6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253124 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cvg4v\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253149 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e07434b8-dec6-40ac-b297-d1dcff926553-metrics-tls\") pod \"ingress-operator-5b745b69d9-8fccn\" (UID: \"e07434b8-dec6-40ac-b297-d1dcff926553\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253172 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/61a26a3c-422b-4596-821e-fb0d287ce966-profile-collector-cert\") pod \"catalog-operator-68c6474976-qrplb\" (UID: \"61a26a3c-422b-4596-821e-fb0d287ce966\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253217 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e07434b8-dec6-40ac-b297-d1dcff926553-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8fccn\" (UID: \"e07434b8-dec6-40ac-b297-d1dcff926553\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253253 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e07434b8-dec6-40ac-b297-d1dcff926553-trusted-ca\") pod \"ingress-operator-5b745b69d9-8fccn\" (UID: \"e07434b8-dec6-40ac-b297-d1dcff926553\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253287 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85269fe1-85ed-4c9d-9864-068497a85668-cert\") pod \"ingress-canary-srjk5\" (UID: \"85269fe1-85ed-4c9d-9864-068497a85668\") " pod="openshift-ingress-canary/ingress-canary-srjk5" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253325 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7nmw\" (UniqueName: \"kubernetes.io/projected/85269fe1-85ed-4c9d-9864-068497a85668-kube-api-access-n7nmw\") pod \"ingress-canary-srjk5\" (UID: \"85269fe1-85ed-4c9d-9864-068497a85668\") " pod="openshift-ingress-canary/ingress-canary-srjk5" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253349 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253360 4983 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-p9t2s\" (UniqueName: \"kubernetes.io/projected/61a26a3c-422b-4596-821e-fb0d287ce966-kube-api-access-p9t2s\") pod \"catalog-operator-68c6474976-qrplb\" (UID: \"61a26a3c-422b-4596-821e-fb0d287ce966\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253590 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkcj7\" (UniqueName: \"kubernetes.io/projected/69a54a85-daaa-41f3-8590-99dc51245879-kube-api-access-lkcj7\") pod \"package-server-manager-789f6589d5-fd4hc\" (UID: \"69a54a85-daaa-41f3-8590-99dc51245879\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253623 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec554c34-7720-4354-8639-b7b70a2f8894-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-59vxf\" (UID: \"ec554c34-7720-4354-8639-b7b70a2f8894\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.253646 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-l2gbx\" (UID: \"6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.257194 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/61a26a3c-422b-4596-821e-fb0d287ce966-profile-collector-cert\") pod \"catalog-operator-68c6474976-qrplb\" (UID: 
\"61a26a3c-422b-4596-821e-fb0d287ce966\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.272402 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.292374 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.312038 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.331953 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.353170 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.372915 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.392232 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.412281 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.431723 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.452508 4983 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.473105 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.492841 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.512989 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.535313 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.555454 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.562106 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bdb6c5b-8666-459e-83c6-a783159a20cb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b4hzt\" (UID: \"3bdb6c5b-8666-459e-83c6-a783159a20cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.572726 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.576745 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec554c34-7720-4354-8639-b7b70a2f8894-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-59vxf\" (UID: \"ec554c34-7720-4354-8639-b7b70a2f8894\") 
" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.593232 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.614170 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.632254 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.633878 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bdb6c5b-8666-459e-83c6-a783159a20cb-config\") pod \"kube-controller-manager-operator-78b949d7b-b4hzt\" (UID: \"3bdb6c5b-8666-459e-83c6-a783159a20cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.652273 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.673625 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.695099 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.712726 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.714780 
4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec554c34-7720-4354-8639-b7b70a2f8894-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-59vxf\" (UID: \"ec554c34-7720-4354-8639-b7b70a2f8894\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.732370 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.752742 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.772238 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.777420 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-l2gbx\" (UID: \"6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.793124 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.794492 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-l2gbx\" (UID: 
\"6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.812527 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.831986 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.852141 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.871792 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.892440 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.912230 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.916229 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/61a26a3c-422b-4596-821e-fb0d287ce966-srv-cert\") pod \"catalog-operator-68c6474976-qrplb\" (UID: \"61a26a3c-422b-4596-821e-fb0d287ce966\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.934024 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.952698 4983 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.972895 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.978865 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e07434b8-dec6-40ac-b297-d1dcff926553-metrics-tls\") pod \"ingress-operator-5b745b69d9-8fccn\" (UID: \"e07434b8-dec6-40ac-b297-d1dcff926553\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" Nov 25 20:29:29 crc kubenswrapper[4983]: I1125 20:29:29.992809 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.026800 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.032685 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.037057 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e07434b8-dec6-40ac-b297-d1dcff926553-trusted-ca\") pod \"ingress-operator-5b745b69d9-8fccn\" (UID: \"e07434b8-dec6-40ac-b297-d1dcff926553\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.052588 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.072716 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 20:29:30 crc 
kubenswrapper[4983]: I1125 20:29:30.082067 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9de5f7b4-7675-4e35-a273-8a68c4d127e1-serving-cert\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.092357 4983 request.go:700] Waited for 1.020620247s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0 Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.094478 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.107182 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9de5f7b4-7675-4e35-a273-8a68c4d127e1-etcd-client\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.112688 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.114381 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9de5f7b4-7675-4e35-a273-8a68c4d127e1-config\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.133333 4983 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.153417 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.163410 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9de5f7b4-7675-4e35-a273-8a68c4d127e1-etcd-ca\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.172720 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.184019 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9de5f7b4-7675-4e35-a273-8a68c4d127e1-etcd-service-ca\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.193053 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.213452 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.233673 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 20:29:30 crc kubenswrapper[4983]: E1125 20:29:30.252724 4983 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Nov 25 20:29:30 crc 
kubenswrapper[4983]: E1125 20:29:30.252973 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ec800216-1c1b-4324-a1be-2a0c5dcc6ce5-control-plane-machine-set-operator-tls podName:ec800216-1c1b-4324-a1be-2a0c5dcc6ce5 nodeName:}" failed. No retries permitted until 2025-11-25 20:29:30.752947447 +0000 UTC m=+151.865480849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ec800216-1c1b-4324-a1be-2a0c5dcc6ce5-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-d2mch" (UID: "ec800216-1c1b-4324-a1be-2a0c5dcc6ce5") : failed to sync secret cache: timed out waiting for the condition
Nov 25 20:29:30 crc kubenswrapper[4983]: E1125 20:29:30.253009 4983 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition
Nov 25 20:29:30 crc kubenswrapper[4983]: E1125 20:29:30.253195 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69a54a85-daaa-41f3-8590-99dc51245879-package-server-manager-serving-cert podName:69a54a85-daaa-41f3-8590-99dc51245879 nodeName:}" failed. No retries permitted until 2025-11-25 20:29:30.753182283 +0000 UTC m=+151.865715685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/69a54a85-daaa-41f3-8590-99dc51245879-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-fd4hc" (UID: "69a54a85-daaa-41f3-8590-99dc51245879") : failed to sync secret cache: timed out waiting for the condition
Nov 25 20:29:30 crc kubenswrapper[4983]: E1125 20:29:30.253035 4983 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.253290 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 25 20:29:30 crc kubenswrapper[4983]: E1125 20:29:30.253389 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-operator-metrics podName:50f76fe0-cc37-4a22-bb1a-7df5d6012224 nodeName:}" failed. No retries permitted until 2025-11-25 20:29:30.753377168 +0000 UTC m=+151.865910570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-operator-metrics") pod "marketplace-operator-79b997595-cvg4v" (UID: "50f76fe0-cc37-4a22-bb1a-7df5d6012224") : failed to sync secret cache: timed out waiting for the condition
Nov 25 20:29:30 crc kubenswrapper[4983]: E1125 20:29:30.254173 4983 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Nov 25 20:29:30 crc kubenswrapper[4983]: E1125 20:29:30.254388 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-trusted-ca podName:50f76fe0-cc37-4a22-bb1a-7df5d6012224 nodeName:}" failed. No retries permitted until 2025-11-25 20:29:30.754330773 +0000 UTC m=+151.866864215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-trusted-ca") pod "marketplace-operator-79b997595-cvg4v" (UID: "50f76fe0-cc37-4a22-bb1a-7df5d6012224") : failed to sync configmap cache: timed out waiting for the condition
Nov 25 20:29:30 crc kubenswrapper[4983]: E1125 20:29:30.254502 4983 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Nov 25 20:29:30 crc kubenswrapper[4983]: E1125 20:29:30.254658 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85269fe1-85ed-4c9d-9864-068497a85668-cert podName:85269fe1-85ed-4c9d-9864-068497a85668 nodeName:}" failed. No retries permitted until 2025-11-25 20:29:30.754643651 +0000 UTC m=+151.867177053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/85269fe1-85ed-4c9d-9864-068497a85668-cert") pod "ingress-canary-srjk5" (UID: "85269fe1-85ed-4c9d-9864-068497a85668") : failed to sync secret cache: timed out waiting for the condition
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.272529 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.308674 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.314688 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.333395 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.361013 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.372584 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.392827 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.413126 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.432301 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.452744 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.473224 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.493592 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.512728 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.534126 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.552501 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.573130 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.592762 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.613516 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.633419 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.653156 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.673100 4983 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.692358 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.712463 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.733318 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.753052 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.772377 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.776780 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/69a54a85-daaa-41f3-8590-99dc51245879-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fd4hc\" (UID: \"69a54a85-daaa-41f3-8590-99dc51245879\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.776892 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cvg4v\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.777013 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cvg4v\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.777083 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85269fe1-85ed-4c9d-9864-068497a85668-cert\") pod \"ingress-canary-srjk5\" (UID: \"85269fe1-85ed-4c9d-9864-068497a85668\") " pod="openshift-ingress-canary/ingress-canary-srjk5"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.777150 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec800216-1c1b-4324-a1be-2a0c5dcc6ce5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d2mch\" (UID: \"ec800216-1c1b-4324-a1be-2a0c5dcc6ce5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.779725 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cvg4v\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.781304 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cvg4v\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.782141 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec800216-1c1b-4324-a1be-2a0c5dcc6ce5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d2mch\" (UID: \"ec800216-1c1b-4324-a1be-2a0c5dcc6ce5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.782739 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/69a54a85-daaa-41f3-8590-99dc51245879-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-fd4hc\" (UID: \"69a54a85-daaa-41f3-8590-99dc51245879\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.783867 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85269fe1-85ed-4c9d-9864-068497a85668-cert\") pod \"ingress-canary-srjk5\" (UID: \"85269fe1-85ed-4c9d-9864-068497a85668\") " pod="openshift-ingress-canary/ingress-canary-srjk5"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.827662 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjm75\" (UniqueName: \"kubernetes.io/projected/14c4f68a-8367-414b-87f9-3582c6ec9064-kube-api-access-zjm75\") pod \"machine-approver-56656f9798-zzlhs\" (UID: \"14c4f68a-8367-414b-87f9-3582c6ec9064\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.846707 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd5q7\" (UniqueName: \"kubernetes.io/projected/498e7b96-97c7-43f6-ba6d-cbc03ea9e543-kube-api-access-bd5q7\") pod \"console-operator-58897d9998-8zwnb\" (UID: \"498e7b96-97c7-43f6-ba6d-cbc03ea9e543\") " pod="openshift-console-operator/console-operator-58897d9998-8zwnb"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.866149 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5tvx\" (UniqueName: \"kubernetes.io/projected/d10a20ce-f44b-45b4-b199-759adf792fe0-kube-api-access-g5tvx\") pod \"oauth-openshift-558db77b4-9zs6k\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.892418 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcmt9\" (UniqueName: \"kubernetes.io/projected/d11029b9-9bae-4f73-a448-ae8996511256-kube-api-access-qcmt9\") pod \"cluster-image-registry-operator-dc59b4c8b-xpk5j\" (UID: \"d11029b9-9bae-4f73-a448-ae8996511256\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.897719 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.909705 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs22p\" (UniqueName: \"kubernetes.io/projected/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-kube-api-access-vs22p\") pod \"controller-manager-879f6c89f-6x4tb\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.928271 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q45km\" (UniqueName: \"kubernetes.io/projected/aed03db9-cd2b-4aa5-96d4-de0e00e95842-kube-api-access-q45km\") pod \"machine-api-operator-5694c8668f-ztngk\" (UID: \"aed03db9-cd2b-4aa5-96d4-de0e00e95842\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.948345 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lpxk\" (UniqueName: \"kubernetes.io/projected/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-kube-api-access-2lpxk\") pod \"console-f9d7485db-g8bfq\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " pod="openshift-console/console-f9d7485db-g8bfq"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.956407 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.966110 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9gtd\" (UniqueName: \"kubernetes.io/projected/a8b5262c-2b08-404a-a884-d5294dcc82be-kube-api-access-w9gtd\") pod \"openshift-apiserver-operator-796bbdcf4f-rkw88\" (UID: \"a8b5262c-2b08-404a-a884-d5294dcc82be\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88"
Nov 25 20:29:30 crc kubenswrapper[4983]: I1125 20:29:30.988418 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6l9p\" (UniqueName: \"kubernetes.io/projected/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-kube-api-access-f6l9p\") pod \"route-controller-manager-6576b87f9c-l7zvq\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.006417 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8q6t\" (UniqueName: \"kubernetes.io/projected/2a38d967-78e8-45a1-9093-d24e38d84da7-kube-api-access-g8q6t\") pod \"openshift-config-operator-7777fb866f-c94zn\" (UID: \"2a38d967-78e8-45a1-9093-d24e38d84da7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.028826 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftgpc\" (UniqueName: \"kubernetes.io/projected/567037cb-2605-4e85-9d56-909dab2a8d1d-kube-api-access-ftgpc\") pod \"authentication-operator-69f744f599-lbln6\" (UID: \"567037cb-2605-4e85-9d56-909dab2a8d1d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.042644 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8zwnb"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.051578 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m646\" (UniqueName: \"kubernetes.io/projected/1fff092b-fe51-487a-a6b5-af6ad677ec38-kube-api-access-4m646\") pod \"cluster-samples-operator-665b6dd947-r26wk\" (UID: \"1fff092b-fe51-487a-a6b5-af6ad677ec38\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.057973 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.078144 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnwxz\" (UniqueName: \"kubernetes.io/projected/1e36d951-df6e-4b44-a4b5-4aaa3daefe75-kube-api-access-pnwxz\") pod \"machine-config-controller-84d6567774-c8p67\" (UID: \"1e36d951-df6e-4b44-a4b5-4aaa3daefe75\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.088577 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n85v\" (UniqueName: \"kubernetes.io/projected/deeb38fe-7024-47af-94be-9099f96d6cc9-kube-api-access-8n85v\") pod \"openshift-controller-manager-operator-756b6f6bc6-2ttdh\" (UID: \"deeb38fe-7024-47af-94be-9099f96d6cc9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.110613 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.110629 4983 request.go:700] Waited for 1.947106449s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.112075 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwvqp\" (UniqueName: \"kubernetes.io/projected/e842492e-468d-46a1-b4ae-2098daf5e263-kube-api-access-wwvqp\") pod \"downloads-7954f5f757-b2krm\" (UID: \"e842492e-468d-46a1-b4ae-2098daf5e263\") " pod="openshift-console/downloads-7954f5f757-b2krm"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.125123 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.128278 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r8wj\" (UniqueName: \"kubernetes.io/projected/51523753-e43c-4a7d-a3a2-412e6ef40670-kube-api-access-9r8wj\") pod \"apiserver-7bbb656c7d-j79zm\" (UID: \"51523753-e43c-4a7d-a3a2-412e6ef40670\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.137575 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.137644 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-ztngk"]
Nov 25 20:29:31 crc kubenswrapper[4983]: W1125 20:29:31.145697 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaed03db9_cd2b_4aa5_96d4_de0e00e95842.slice/crio-9c03c451774bac512ed996d960f56af2a6b959d556a2881168f03a7e80bbbb06 WatchSource:0}: Error finding container 9c03c451774bac512ed996d960f56af2a6b959d556a2881168f03a7e80bbbb06: Status 404 returned error can't find the container with id 9c03c451774bac512ed996d960f56af2a6b959d556a2881168f03a7e80bbbb06
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.151519 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.155308 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.173180 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.179013 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.212126 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxlgh\" (UniqueName: \"kubernetes.io/projected/32b0116d-fe96-4215-a627-49ef66a62147-kube-api-access-bxlgh\") pod \"apiserver-76f77b778f-6xxlr\" (UID: \"32b0116d-fe96-4215-a627-49ef66a62147\") " pod="openshift-apiserver/apiserver-76f77b778f-6xxlr"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.214196 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.226512 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.230400 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d11029b9-9bae-4f73-a448-ae8996511256-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xpk5j\" (UID: \"d11029b9-9bae-4f73-a448-ae8996511256\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.232051 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.239455 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.242824 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g8bfq"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.252133 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.274855 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.283907 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8zwnb"]
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.317716 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-b2krm"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.320006 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6x4tb"]
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.330435 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.333152 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlfp7\" (UniqueName: \"kubernetes.io/projected/50f76fe0-cc37-4a22-bb1a-7df5d6012224-kube-api-access-xlfp7\") pod \"marketplace-operator-79b997595-cvg4v\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.337275 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6xxlr"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.337520 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.358114 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbspl\" (UniqueName: \"kubernetes.io/projected/9de5f7b4-7675-4e35-a273-8a68c4d127e1-kube-api-access-vbspl\") pod \"etcd-operator-b45778765-sw79t\" (UID: \"9de5f7b4-7675-4e35-a273-8a68c4d127e1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.379034 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.379276 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv48h\" (UniqueName: \"kubernetes.io/projected/ec800216-1c1b-4324-a1be-2a0c5dcc6ce5-kube-api-access-mv48h\") pod \"control-plane-machine-set-operator-78cbb6b69f-d2mch\" (UID: \"ec800216-1c1b-4324-a1be-2a0c5dcc6ce5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.381084 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88"]
Nov 25 20:29:31 crc kubenswrapper[4983]: W1125 20:29:31.396596 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98915ddf_6a6b_4c4e_a8b5_379567bbbf09.slice/crio-1cda094cb646798437381da0b9458c1836b05eb0bbebf6eade0b1cb066aef936 WatchSource:0}: Error finding container 1cda094cb646798437381da0b9458c1836b05eb0bbebf6eade0b1cb066aef936: Status 404 returned error can't find the container with id 1cda094cb646798437381da0b9458c1836b05eb0bbebf6eade0b1cb066aef936
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.397427 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntt5j\" (UniqueName: \"kubernetes.io/projected/e07434b8-dec6-40ac-b297-d1dcff926553-kube-api-access-ntt5j\") pod \"ingress-operator-5b745b69d9-8fccn\" (UID: \"e07434b8-dec6-40ac-b297-d1dcff926553\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.410999 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9t2s\" (UniqueName: \"kubernetes.io/projected/61a26a3c-422b-4596-821e-fb0d287ce966-kube-api-access-p9t2s\") pod \"catalog-operator-68c6474976-qrplb\" (UID: \"61a26a3c-422b-4596-821e-fb0d287ce966\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb"
Nov 25 20:29:31 crc kubenswrapper[4983]: W1125 20:29:31.421355 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8b5262c_2b08_404a_a884_d5294dcc82be.slice/crio-d7a9ad89c21a6a094d2a739f815dbde2c81a093be0252e4f1d332a566a1b00b7 WatchSource:0}: Error finding container d7a9ad89c21a6a094d2a739f815dbde2c81a093be0252e4f1d332a566a1b00b7: Status 404 returned error can't find the container with id d7a9ad89c21a6a094d2a739f815dbde2c81a093be0252e4f1d332a566a1b00b7
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.431676 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bdb6c5b-8666-459e-83c6-a783159a20cb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b4hzt\" (UID: \"3bdb6c5b-8666-459e-83c6-a783159a20cb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.445591 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9zs6k"]
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.446298 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.447345 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec554c34-7720-4354-8639-b7b70a2f8894-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-59vxf\" (UID: \"ec554c34-7720-4354-8639-b7b70a2f8894\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.472943 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.479503 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmdx9\" (UniqueName: \"kubernetes.io/projected/6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e-kube-api-access-qmdx9\") pod \"kube-storage-version-migrator-operator-b67b599dd-l2gbx\" (UID: \"6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.485018 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.489292 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.496390 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkcj7\" (UniqueName: \"kubernetes.io/projected/69a54a85-daaa-41f3-8590-99dc51245879-kube-api-access-lkcj7\") pod \"package-server-manager-789f6589d5-fd4hc\" (UID: \"69a54a85-daaa-41f3-8590-99dc51245879\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.507219 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.524272 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7nmw\" (UniqueName: \"kubernetes.io/projected/85269fe1-85ed-4c9d-9864-068497a85668-kube-api-access-n7nmw\") pod \"ingress-canary-srjk5\" (UID: \"85269fe1-85ed-4c9d-9864-068497a85668\") " pod="openshift-ingress-canary/ingress-canary-srjk5"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.526767 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e07434b8-dec6-40ac-b297-d1dcff926553-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8fccn\" (UID: \"e07434b8-dec6-40ac-b297-d1dcff926553\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.561578 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-srjk5"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.593529 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-c94zn"]
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600278 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cb434a7b-12ca-4505-b66c-5d5bf4178d12-registry-certificates\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600321 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab247bf3-165b-4513-ad09-b33ce8fc15a8-config-volume\") pod \"collect-profiles-29401695-55fbx\" (UID: \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600351 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ltb5\" (UniqueName: \"kubernetes.io/projected/20acccdd-eff5-4128-a24e-c7c6aa9e4fd9-kube-api-access-5ltb5\") pod \"migrator-59844c95c7-wcs4w\" (UID: \"20acccdd-eff5-4128-a24e-c7c6aa9e4fd9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wcs4w"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600392 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-csi-data-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600417 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2qt5n\" (UID: \"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600458 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr625\" (UniqueName: \"kubernetes.io/projected/5920c5fd-c1ab-4729-8dd1-8df4ee246684-kube-api-access-qr625\") pod \"packageserver-d55dfcdfc-hr5b8\" (UID: \"5920c5fd-c1ab-4729-8dd1-8df4ee246684\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600479 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6dcq\" (UniqueName: \"kubernetes.io/projected/6845406f-45aa-4abf-b2ea-729513677ab8-kube-api-access-c6dcq\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600506 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b196bf-3941-451e-818c-61af0664a204-config\") pod \"service-ca-operator-777779d784-svs28\" (UID: \"97b196bf-3941-451e-818c-61af0664a204\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28"
Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600538 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\"
(UniqueName: \"kubernetes.io/configmap/4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad-images\") pod \"machine-config-operator-74547568cd-2qt5n\" (UID: \"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600588 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb434a7b-12ca-4505-b66c-5d5bf4178d12-trusted-ca\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600610 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqmd6\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-kube-api-access-rqmd6\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600634 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad-proxy-tls\") pod \"machine-config-operator-74547568cd-2qt5n\" (UID: \"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600658 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f6bb719d-969b-4b39-8495-85a5e91123a6-signing-key\") pod \"service-ca-9c57cc56f-vd426\" (UID: \"f6bb719d-969b-4b39-8495-85a5e91123a6\") " pod="openshift-service-ca/service-ca-9c57cc56f-vd426" Nov 25 20:29:31 crc 
kubenswrapper[4983]: I1125 20:29:31.600685 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljzm\" (UniqueName: \"kubernetes.io/projected/39878b72-74a2-4043-8dd6-b195b7030bfe-kube-api-access-wljzm\") pod \"multus-admission-controller-857f4d67dd-vp8dm\" (UID: \"39878b72-74a2-4043-8dd6-b195b7030bfe\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vp8dm" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600708 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-registry-tls\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600739 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-665g6\" (UniqueName: \"kubernetes.io/projected/97b196bf-3941-451e-818c-61af0664a204-kube-api-access-665g6\") pod \"service-ca-operator-777779d784-svs28\" (UID: \"97b196bf-3941-451e-818c-61af0664a204\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600797 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-default-certificate\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600831 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/409a937d-29d5-426e-9aba-51c8e44b387a-config\") pod \"kube-apiserver-operator-766d6c64bb-sxp9s\" (UID: \"409a937d-29d5-426e-9aba-51c8e44b387a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600850 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-plugins-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600870 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znnqm\" (UniqueName: \"kubernetes.io/projected/f6bb719d-969b-4b39-8495-85a5e91123a6-kube-api-access-znnqm\") pod \"service-ca-9c57cc56f-vd426\" (UID: \"f6bb719d-969b-4b39-8495-85a5e91123a6\") " pod="openshift-service-ca/service-ca-9c57cc56f-vd426" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600892 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-bound-sa-token\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600914 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f6bb719d-969b-4b39-8495-85a5e91123a6-signing-cabundle\") pod \"service-ca-9c57cc56f-vd426\" (UID: \"f6bb719d-969b-4b39-8495-85a5e91123a6\") " pod="openshift-service-ca/service-ca-9c57cc56f-vd426" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600935 
4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5920c5fd-c1ab-4729-8dd1-8df4ee246684-apiservice-cert\") pod \"packageserver-d55dfcdfc-hr5b8\" (UID: \"5920c5fd-c1ab-4729-8dd1-8df4ee246684\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600956 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q6h9\" (UniqueName: \"kubernetes.io/projected/4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad-kube-api-access-2q6h9\") pod \"machine-config-operator-74547568cd-2qt5n\" (UID: \"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.600991 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-metrics-certs\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601038 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601077 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/409a937d-29d5-426e-9aba-51c8e44b387a-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-sxp9s\" (UID: \"409a937d-29d5-426e-9aba-51c8e44b387a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601099 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/409a937d-29d5-426e-9aba-51c8e44b387a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-sxp9s\" (UID: \"409a937d-29d5-426e-9aba-51c8e44b387a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601129 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4rl7\" (UniqueName: \"kubernetes.io/projected/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-kube-api-access-r4rl7\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601165 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-service-ca-bundle\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601188 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-socket-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601210 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kj68\" (UniqueName: \"kubernetes.io/projected/b18fa273-31a7-4818-a3bc-e0311d4f6bde-kube-api-access-7kj68\") pod \"dns-operator-744455d44c-4sz55\" (UID: \"b18fa273-31a7-4818-a3bc-e0311d4f6bde\") " pod="openshift-dns-operator/dns-operator-744455d44c-4sz55" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601233 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/da31d090-0806-4ef1-bd92-c64e2b1795d8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7gnfb\" (UID: \"da31d090-0806-4ef1-bd92-c64e2b1795d8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601279 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cb434a7b-12ca-4505-b66c-5d5bf4178d12-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601297 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/da31d090-0806-4ef1-bd92-c64e2b1795d8-srv-cert\") pod \"olm-operator-6b444d44fb-7gnfb\" (UID: \"da31d090-0806-4ef1-bd92-c64e2b1795d8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601331 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w9cm\" (UniqueName: \"kubernetes.io/projected/da31d090-0806-4ef1-bd92-c64e2b1795d8-kube-api-access-8w9cm\") pod \"olm-operator-6b444d44fb-7gnfb\" (UID: 
\"da31d090-0806-4ef1-bd92-c64e2b1795d8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601352 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5920c5fd-c1ab-4729-8dd1-8df4ee246684-webhook-cert\") pod \"packageserver-d55dfcdfc-hr5b8\" (UID: \"5920c5fd-c1ab-4729-8dd1-8df4ee246684\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601375 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hnlf\" (UniqueName: \"kubernetes.io/projected/ab247bf3-165b-4513-ad09-b33ce8fc15a8-kube-api-access-7hnlf\") pod \"collect-profiles-29401695-55fbx\" (UID: \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601399 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-mountpoint-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601479 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b18fa273-31a7-4818-a3bc-e0311d4f6bde-metrics-tls\") pod \"dns-operator-744455d44c-4sz55\" (UID: \"b18fa273-31a7-4818-a3bc-e0311d4f6bde\") " pod="openshift-dns-operator/dns-operator-744455d44c-4sz55" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601501 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b196bf-3941-451e-818c-61af0664a204-serving-cert\") pod \"service-ca-operator-777779d784-svs28\" (UID: \"97b196bf-3941-451e-818c-61af0664a204\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601522 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab247bf3-165b-4513-ad09-b33ce8fc15a8-secret-volume\") pod \"collect-profiles-29401695-55fbx\" (UID: \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601622 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-registration-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601649 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cb434a7b-12ca-4505-b66c-5d5bf4178d12-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601685 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5920c5fd-c1ab-4729-8dd1-8df4ee246684-tmpfs\") pod \"packageserver-d55dfcdfc-hr5b8\" (UID: \"5920c5fd-c1ab-4729-8dd1-8df4ee246684\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 
20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601718 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/39878b72-74a2-4043-8dd6-b195b7030bfe-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vp8dm\" (UID: \"39878b72-74a2-4043-8dd6-b195b7030bfe\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vp8dm" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.601750 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-stats-auth\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: E1125 20:29:31.604397 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:32.10438178 +0000 UTC m=+153.216915172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.683935 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-b2krm"] Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.701089 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk"] Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.701128 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g8bfq"] Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.705660 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:31 crc kubenswrapper[4983]: E1125 20:29:31.705765 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:32.205748884 +0000 UTC m=+153.318282276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.705927 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cb434a7b-12ca-4505-b66c-5d5bf4178d12-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.705951 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkthl\" (UniqueName: \"kubernetes.io/projected/5092e350-79ed-419f-8e96-d6e5c9430b64-kube-api-access-fkthl\") pod \"machine-config-server-6fm9n\" (UID: \"5092e350-79ed-419f-8e96-d6e5c9430b64\") " pod="openshift-machine-config-operator/machine-config-server-6fm9n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.712645 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.722860 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5920c5fd-c1ab-4729-8dd1-8df4ee246684-tmpfs\") pod \"packageserver-d55dfcdfc-hr5b8\" (UID: \"5920c5fd-c1ab-4729-8dd1-8df4ee246684\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.723025 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/39878b72-74a2-4043-8dd6-b195b7030bfe-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vp8dm\" (UID: \"39878b72-74a2-4043-8dd6-b195b7030bfe\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vp8dm" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.723287 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-stats-auth\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.723319 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cb434a7b-12ca-4505-b66c-5d5bf4178d12-registry-certificates\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.723341 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ab247bf3-165b-4513-ad09-b33ce8fc15a8-config-volume\") pod \"collect-profiles-29401695-55fbx\" (UID: \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.723484 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ltb5\" (UniqueName: \"kubernetes.io/projected/20acccdd-eff5-4128-a24e-c7c6aa9e4fd9-kube-api-access-5ltb5\") pod \"migrator-59844c95c7-wcs4w\" (UID: \"20acccdd-eff5-4128-a24e-c7c6aa9e4fd9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wcs4w" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.723546 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.723788 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-csi-data-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.723846 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5920c5fd-c1ab-4729-8dd1-8df4ee246684-tmpfs\") pod \"packageserver-d55dfcdfc-hr5b8\" (UID: \"5920c5fd-c1ab-4729-8dd1-8df4ee246684\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.723581 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-csi-data-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: 
\"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.724038 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5092e350-79ed-419f-8e96-d6e5c9430b64-certs\") pod \"machine-config-server-6fm9n\" (UID: \"5092e350-79ed-419f-8e96-d6e5c9430b64\") " pod="openshift-machine-config-operator/machine-config-server-6fm9n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.724105 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2qt5n\" (UID: \"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.724175 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr625\" (UniqueName: \"kubernetes.io/projected/5920c5fd-c1ab-4729-8dd1-8df4ee246684-kube-api-access-qr625\") pod \"packageserver-d55dfcdfc-hr5b8\" (UID: \"5920c5fd-c1ab-4729-8dd1-8df4ee246684\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.724209 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6dcq\" (UniqueName: \"kubernetes.io/projected/6845406f-45aa-4abf-b2ea-729513677ab8-kube-api-access-c6dcq\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.724235 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/09736212-5e5c-42ba-a184-b8b1f0f0a67c-config-volume\") pod \"dns-default-fc95k\" (UID: \"09736212-5e5c-42ba-a184-b8b1f0f0a67c\") " pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.724330 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5092e350-79ed-419f-8e96-d6e5c9430b64-node-bootstrap-token\") pod \"machine-config-server-6fm9n\" (UID: \"5092e350-79ed-419f-8e96-d6e5c9430b64\") " pod="openshift-machine-config-operator/machine-config-server-6fm9n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.724494 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b196bf-3941-451e-818c-61af0664a204-config\") pod \"service-ca-operator-777779d784-svs28\" (UID: \"97b196bf-3941-451e-818c-61af0664a204\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.724630 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad-images\") pod \"machine-config-operator-74547568cd-2qt5n\" (UID: \"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.724687 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6ds5\" (UniqueName: \"kubernetes.io/projected/09736212-5e5c-42ba-a184-b8b1f0f0a67c-kube-api-access-m6ds5\") pod \"dns-default-fc95k\" (UID: \"09736212-5e5c-42ba-a184-b8b1f0f0a67c\") " pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.725042 4983 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb434a7b-12ca-4505-b66c-5d5bf4178d12-trusted-ca\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.725088 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqmd6\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-kube-api-access-rqmd6\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.725157 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09736212-5e5c-42ba-a184-b8b1f0f0a67c-metrics-tls\") pod \"dns-default-fc95k\" (UID: \"09736212-5e5c-42ba-a184-b8b1f0f0a67c\") " pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.725189 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad-proxy-tls\") pod \"machine-config-operator-74547568cd-2qt5n\" (UID: \"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.725233 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f6bb719d-969b-4b39-8495-85a5e91123a6-signing-key\") pod \"service-ca-9c57cc56f-vd426\" (UID: \"f6bb719d-969b-4b39-8495-85a5e91123a6\") " pod="openshift-service-ca/service-ca-9c57cc56f-vd426" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.725748 4983 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2qt5n\" (UID: \"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.726242 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wljzm\" (UniqueName: \"kubernetes.io/projected/39878b72-74a2-4043-8dd6-b195b7030bfe-kube-api-access-wljzm\") pod \"multus-admission-controller-857f4d67dd-vp8dm\" (UID: \"39878b72-74a2-4043-8dd6-b195b7030bfe\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vp8dm" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.726725 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-registry-tls\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.726763 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-665g6\" (UniqueName: \"kubernetes.io/projected/97b196bf-3941-451e-818c-61af0664a204-kube-api-access-665g6\") pod \"service-ca-operator-777779d784-svs28\" (UID: \"97b196bf-3941-451e-818c-61af0664a204\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.726771 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cb434a7b-12ca-4505-b66c-5d5bf4178d12-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.727705 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b196bf-3941-451e-818c-61af0664a204-config\") pod \"service-ca-operator-777779d784-svs28\" (UID: \"97b196bf-3941-451e-818c-61af0664a204\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.729064 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad-images\") pod \"machine-config-operator-74547568cd-2qt5n\" (UID: \"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.730484 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb434a7b-12ca-4505-b66c-5d5bf4178d12-trusted-ca\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.730736 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab247bf3-165b-4513-ad09-b33ce8fc15a8-config-volume\") pod \"collect-profiles-29401695-55fbx\" (UID: \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.730834 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j"] Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.731009 4983 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cb434a7b-12ca-4505-b66c-5d5bf4178d12-registry-certificates\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.733049 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-lbln6"] Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.733108 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-default-certificate\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.734023 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/409a937d-29d5-426e-9aba-51c8e44b387a-config\") pod \"kube-apiserver-operator-766d6c64bb-sxp9s\" (UID: \"409a937d-29d5-426e-9aba-51c8e44b387a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.734138 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-plugins-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.734507 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znnqm\" (UniqueName: \"kubernetes.io/projected/f6bb719d-969b-4b39-8495-85a5e91123a6-kube-api-access-znnqm\") pod 
\"service-ca-9c57cc56f-vd426\" (UID: \"f6bb719d-969b-4b39-8495-85a5e91123a6\") " pod="openshift-service-ca/service-ca-9c57cc56f-vd426" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.734607 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-bound-sa-token\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.734837 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-plugins-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.734913 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f6bb719d-969b-4b39-8495-85a5e91123a6-signing-cabundle\") pod \"service-ca-9c57cc56f-vd426\" (UID: \"f6bb719d-969b-4b39-8495-85a5e91123a6\") " pod="openshift-service-ca/service-ca-9c57cc56f-vd426" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.735012 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5920c5fd-c1ab-4729-8dd1-8df4ee246684-apiservice-cert\") pod \"packageserver-d55dfcdfc-hr5b8\" (UID: \"5920c5fd-c1ab-4729-8dd1-8df4ee246684\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.735078 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/409a937d-29d5-426e-9aba-51c8e44b387a-config\") pod 
\"kube-apiserver-operator-766d6c64bb-sxp9s\" (UID: \"409a937d-29d5-426e-9aba-51c8e44b387a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.735346 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq"] Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.737375 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f6bb719d-969b-4b39-8495-85a5e91123a6-signing-cabundle\") pod \"service-ca-9c57cc56f-vd426\" (UID: \"f6bb719d-969b-4b39-8495-85a5e91123a6\") " pod="openshift-service-ca/service-ca-9c57cc56f-vd426" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.737701 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q6h9\" (UniqueName: \"kubernetes.io/projected/4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad-kube-api-access-2q6h9\") pod \"machine-config-operator-74547568cd-2qt5n\" (UID: \"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.737811 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad-proxy-tls\") pod \"machine-config-operator-74547568cd-2qt5n\" (UID: \"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.738138 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5920c5fd-c1ab-4729-8dd1-8df4ee246684-apiservice-cert\") pod \"packageserver-d55dfcdfc-hr5b8\" (UID: \"5920c5fd-c1ab-4729-8dd1-8df4ee246684\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.738338 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-metrics-certs\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.738974 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.739107 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-registry-tls\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.739178 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-default-certificate\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: E1125 20:29:31.739288 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 20:29:32.239269325 +0000 UTC m=+153.351802717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.739493 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/409a937d-29d5-426e-9aba-51c8e44b387a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-sxp9s\" (UID: \"409a937d-29d5-426e-9aba-51c8e44b387a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.739714 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/409a937d-29d5-426e-9aba-51c8e44b387a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-sxp9s\" (UID: \"409a937d-29d5-426e-9aba-51c8e44b387a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.739772 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4rl7\" (UniqueName: \"kubernetes.io/projected/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-kube-api-access-r4rl7\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.740026 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-service-ca-bundle\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.740538 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-socket-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.740605 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kj68\" (UniqueName: \"kubernetes.io/projected/b18fa273-31a7-4818-a3bc-e0311d4f6bde-kube-api-access-7kj68\") pod \"dns-operator-744455d44c-4sz55\" (UID: \"b18fa273-31a7-4818-a3bc-e0311d4f6bde\") " pod="openshift-dns-operator/dns-operator-744455d44c-4sz55" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.740643 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/da31d090-0806-4ef1-bd92-c64e2b1795d8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7gnfb\" (UID: \"da31d090-0806-4ef1-bd92-c64e2b1795d8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.740764 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-service-ca-bundle\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.740813 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cb434a7b-12ca-4505-b66c-5d5bf4178d12-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.740852 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/da31d090-0806-4ef1-bd92-c64e2b1795d8-srv-cert\") pod \"olm-operator-6b444d44fb-7gnfb\" (UID: \"da31d090-0806-4ef1-bd92-c64e2b1795d8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.741118 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-socket-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.741409 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cb434a7b-12ca-4505-b66c-5d5bf4178d12-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.741834 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w9cm\" (UniqueName: \"kubernetes.io/projected/da31d090-0806-4ef1-bd92-c64e2b1795d8-kube-api-access-8w9cm\") pod \"olm-operator-6b444d44fb-7gnfb\" (UID: \"da31d090-0806-4ef1-bd92-c64e2b1795d8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 
20:29:31.741931 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5920c5fd-c1ab-4729-8dd1-8df4ee246684-webhook-cert\") pod \"packageserver-d55dfcdfc-hr5b8\" (UID: \"5920c5fd-c1ab-4729-8dd1-8df4ee246684\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.741959 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hnlf\" (UniqueName: \"kubernetes.io/projected/ab247bf3-165b-4513-ad09-b33ce8fc15a8-kube-api-access-7hnlf\") pod \"collect-profiles-29401695-55fbx\" (UID: \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.741999 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-mountpoint-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.742067 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b18fa273-31a7-4818-a3bc-e0311d4f6bde-metrics-tls\") pod \"dns-operator-744455d44c-4sz55\" (UID: \"b18fa273-31a7-4818-a3bc-e0311d4f6bde\") " pod="openshift-dns-operator/dns-operator-744455d44c-4sz55" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.742090 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b196bf-3941-451e-818c-61af0664a204-serving-cert\") pod \"service-ca-operator-777779d784-svs28\" (UID: \"97b196bf-3941-451e-818c-61af0664a204\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.742124 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab247bf3-165b-4513-ad09-b33ce8fc15a8-secret-volume\") pod \"collect-profiles-29401695-55fbx\" (UID: \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.742170 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-registration-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.743941 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/da31d090-0806-4ef1-bd92-c64e2b1795d8-srv-cert\") pod \"olm-operator-6b444d44fb-7gnfb\" (UID: \"da31d090-0806-4ef1-bd92-c64e2b1795d8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.744987 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-registration-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.745894 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6845406f-45aa-4abf-b2ea-729513677ab8-mountpoint-dir\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") 
" pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.747766 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab247bf3-165b-4513-ad09-b33ce8fc15a8-secret-volume\") pod \"collect-profiles-29401695-55fbx\" (UID: \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.749082 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.750236 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b18fa273-31a7-4818-a3bc-e0311d4f6bde-metrics-tls\") pod \"dns-operator-744455d44c-4sz55\" (UID: \"b18fa273-31a7-4818-a3bc-e0311d4f6bde\") " pod="openshift-dns-operator/dns-operator-744455d44c-4sz55" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.751271 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" event={"ID":"d10a20ce-f44b-45b4-b199-759adf792fe0","Type":"ContainerStarted","Data":"1dbc71baa75985fa402a891c0efea19a3d13f7e2e2f90c8d04e4eb2d40736148"} Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.752595 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b196bf-3941-451e-818c-61af0664a204-serving-cert\") pod \"service-ca-operator-777779d784-svs28\" (UID: \"97b196bf-3941-451e-818c-61af0664a204\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.757598 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" 
(UniqueName: \"kubernetes.io/secret/f6bb719d-969b-4b39-8495-85a5e91123a6-signing-key\") pod \"service-ca-9c57cc56f-vd426\" (UID: \"f6bb719d-969b-4b39-8495-85a5e91123a6\") " pod="openshift-service-ca/service-ca-9c57cc56f-vd426" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.758512 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6dcq\" (UniqueName: \"kubernetes.io/projected/6845406f-45aa-4abf-b2ea-729513677ab8-kube-api-access-c6dcq\") pod \"csi-hostpathplugin-rb7rw\" (UID: \"6845406f-45aa-4abf-b2ea-729513677ab8\") " pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.758929 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.764474 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh"] Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.760610 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5920c5fd-c1ab-4729-8dd1-8df4ee246684-webhook-cert\") pod \"packageserver-d55dfcdfc-hr5b8\" (UID: \"5920c5fd-c1ab-4729-8dd1-8df4ee246684\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.771426 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/409a937d-29d5-426e-9aba-51c8e44b387a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-sxp9s\" (UID: \"409a937d-29d5-426e-9aba-51c8e44b387a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.771826 4983 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-metrics-certs\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.772853 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/da31d090-0806-4ef1-bd92-c64e2b1795d8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7gnfb\" (UID: \"da31d090-0806-4ef1-bd92-c64e2b1795d8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.773894 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr625\" (UniqueName: \"kubernetes.io/projected/5920c5fd-c1ab-4729-8dd1-8df4ee246684-kube-api-access-qr625\") pod \"packageserver-d55dfcdfc-hr5b8\" (UID: \"5920c5fd-c1ab-4729-8dd1-8df4ee246684\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.774087 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-stats-auth\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.774219 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/39878b72-74a2-4043-8dd6-b195b7030bfe-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-vp8dm\" (UID: \"39878b72-74a2-4043-8dd6-b195b7030bfe\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-vp8dm" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.779361 4983 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6xxlr"] Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.795360 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqmd6\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-kube-api-access-rqmd6\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.795595 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8zwnb" event={"ID":"498e7b96-97c7-43f6-ba6d-cbc03ea9e543","Type":"ContainerStarted","Data":"4d86ad38eb37915b53753f6cc15fd590cd8aca240bbaa681028bc174c3558480"} Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.795641 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8zwnb" event={"ID":"498e7b96-97c7-43f6-ba6d-cbc03ea9e543","Type":"ContainerStarted","Data":"56b7c759b82feff7246ced904faebe90453ad2fe9e4b67ac7c7624c8358c8caf"} Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.796067 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.798111 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67"] Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.799687 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" event={"ID":"aed03db9-cd2b-4aa5-96d4-de0e00e95842","Type":"ContainerStarted","Data":"b884ef9ecb4c5ef30f1481668e9b4b81d861fc9a4c663c4c03f73313edcbfc9c"} Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.799757 4983 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" event={"ID":"aed03db9-cd2b-4aa5-96d4-de0e00e95842","Type":"ContainerStarted","Data":"2a9688e8476261da4a0d8361f0c61e369894ffb872030144a1a1d32411f24b9b"} Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.799768 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" event={"ID":"aed03db9-cd2b-4aa5-96d4-de0e00e95842","Type":"ContainerStarted","Data":"9c03c451774bac512ed996d960f56af2a6b959d556a2881168f03a7e80bbbb06"} Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.819122 4983 patch_prober.go:28] interesting pod/console-operator-58897d9998-8zwnb container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.819208 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8zwnb" podUID="498e7b96-97c7-43f6-ba6d-cbc03ea9e543" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.820430 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-665g6\" (UniqueName: \"kubernetes.io/projected/97b196bf-3941-451e-818c-61af0664a204-kube-api-access-665g6\") pod \"service-ca-operator-777779d784-svs28\" (UID: \"97b196bf-3941-451e-818c-61af0664a204\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.821410 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" 
event={"ID":"14c4f68a-8367-414b-87f9-3582c6ec9064","Type":"ContainerStarted","Data":"b826dfc2fc4af93baa547bb43205ef75f7e2c0f7803c3f054d1dd4dbb75a4e4c"} Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.821457 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" event={"ID":"14c4f68a-8367-414b-87f9-3582c6ec9064","Type":"ContainerStarted","Data":"de0d47d0a24ee04b602e3902993bfeea119b10ca2ec701e48ed1d1ab43861104"} Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.821470 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" event={"ID":"14c4f68a-8367-414b-87f9-3582c6ec9064","Type":"ContainerStarted","Data":"9421551d1f6dfe25b37a7f061975bef87a527d17619ab480a9e1dd124879bbcb"} Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.823695 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" event={"ID":"2a38d967-78e8-45a1-9093-d24e38d84da7","Type":"ContainerStarted","Data":"1f6daef9f949185fc166bf8dd4a39bdc0ae6755a884ccd5ec196e01757da6276"} Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.827075 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" event={"ID":"98915ddf-6a6b-4c4e-a8b5-379567bbbf09","Type":"ContainerStarted","Data":"1cda094cb646798437381da0b9458c1836b05eb0bbebf6eade0b1cb066aef936"} Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.831211 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm"] Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.845263 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ltb5\" (UniqueName: \"kubernetes.io/projected/20acccdd-eff5-4128-a24e-c7c6aa9e4fd9-kube-api-access-5ltb5\") pod 
\"migrator-59844c95c7-wcs4w\" (UID: \"20acccdd-eff5-4128-a24e-c7c6aa9e4fd9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wcs4w" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.845472 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88" event={"ID":"a8b5262c-2b08-404a-a884-d5294dcc82be","Type":"ContainerStarted","Data":"d7a9ad89c21a6a094d2a739f815dbde2c81a093be0252e4f1d332a566a1b00b7"} Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.845710 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:31 crc kubenswrapper[4983]: E1125 20:29:31.845776 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:32.345762532 +0000 UTC m=+153.458295924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.845913 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.846038 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkthl\" (UniqueName: \"kubernetes.io/projected/5092e350-79ed-419f-8e96-d6e5c9430b64-kube-api-access-fkthl\") pod \"machine-config-server-6fm9n\" (UID: \"5092e350-79ed-419f-8e96-d6e5c9430b64\") " pod="openshift-machine-config-operator/machine-config-server-6fm9n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.846254 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5092e350-79ed-419f-8e96-d6e5c9430b64-certs\") pod \"machine-config-server-6fm9n\" (UID: \"5092e350-79ed-419f-8e96-d6e5c9430b64\") " pod="openshift-machine-config-operator/machine-config-server-6fm9n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.846338 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5092e350-79ed-419f-8e96-d6e5c9430b64-node-bootstrap-token\") pod 
\"machine-config-server-6fm9n\" (UID: \"5092e350-79ed-419f-8e96-d6e5c9430b64\") " pod="openshift-machine-config-operator/machine-config-server-6fm9n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.846373 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09736212-5e5c-42ba-a184-b8b1f0f0a67c-config-volume\") pod \"dns-default-fc95k\" (UID: \"09736212-5e5c-42ba-a184-b8b1f0f0a67c\") " pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.846431 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6ds5\" (UniqueName: \"kubernetes.io/projected/09736212-5e5c-42ba-a184-b8b1f0f0a67c-kube-api-access-m6ds5\") pod \"dns-default-fc95k\" (UID: \"09736212-5e5c-42ba-a184-b8b1f0f0a67c\") " pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.846487 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09736212-5e5c-42ba-a184-b8b1f0f0a67c-metrics-tls\") pod \"dns-default-fc95k\" (UID: \"09736212-5e5c-42ba-a184-b8b1f0f0a67c\") " pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:31 crc kubenswrapper[4983]: E1125 20:29:31.847290 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:32.347282541 +0000 UTC m=+153.459815933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.847348 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09736212-5e5c-42ba-a184-b8b1f0f0a67c-config-volume\") pod \"dns-default-fc95k\" (UID: \"09736212-5e5c-42ba-a184-b8b1f0f0a67c\") " pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.852576 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znnqm\" (UniqueName: \"kubernetes.io/projected/f6bb719d-969b-4b39-8495-85a5e91123a6-kube-api-access-znnqm\") pod \"service-ca-9c57cc56f-vd426\" (UID: \"f6bb719d-969b-4b39-8495-85a5e91123a6\") " pod="openshift-service-ca/service-ca-9c57cc56f-vd426" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.853445 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.854417 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5092e350-79ed-419f-8e96-d6e5c9430b64-certs\") pod \"machine-config-server-6fm9n\" (UID: \"5092e350-79ed-419f-8e96-d6e5c9430b64\") " pod="openshift-machine-config-operator/machine-config-server-6fm9n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.858421 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5092e350-79ed-419f-8e96-d6e5c9430b64-node-bootstrap-token\") pod \"machine-config-server-6fm9n\" (UID: \"5092e350-79ed-419f-8e96-d6e5c9430b64\") " pod="openshift-machine-config-operator/machine-config-server-6fm9n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.860418 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09736212-5e5c-42ba-a184-b8b1f0f0a67c-metrics-tls\") pod \"dns-default-fc95k\" (UID: \"09736212-5e5c-42ba-a184-b8b1f0f0a67c\") " pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.885324 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-bound-sa-token\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.888997 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wljzm\" (UniqueName: \"kubernetes.io/projected/39878b72-74a2-4043-8dd6-b195b7030bfe-kube-api-access-wljzm\") pod \"multus-admission-controller-857f4d67dd-vp8dm\" (UID: \"39878b72-74a2-4043-8dd6-b195b7030bfe\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-vp8dm" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.925840 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q6h9\" (UniqueName: \"kubernetes.io/projected/4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad-kube-api-access-2q6h9\") pod \"machine-config-operator-74547568cd-2qt5n\" (UID: \"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.946046 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/409a937d-29d5-426e-9aba-51c8e44b387a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-sxp9s\" (UID: \"409a937d-29d5-426e-9aba-51c8e44b387a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.949576 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:31 crc kubenswrapper[4983]: E1125 20:29:31.950790 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:32.45076191 +0000 UTC m=+153.563295332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.957233 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.960418 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4rl7\" (UniqueName: \"kubernetes.io/projected/3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91-kube-api-access-r4rl7\") pod \"router-default-5444994796-zql6p\" (UID: \"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91\") " pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.968590 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.986500 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kj68\" (UniqueName: \"kubernetes.io/projected/b18fa273-31a7-4818-a3bc-e0311d4f6bde-kube-api-access-7kj68\") pod \"dns-operator-744455d44c-4sz55\" (UID: \"b18fa273-31a7-4818-a3bc-e0311d4f6bde\") " pod="openshift-dns-operator/dns-operator-744455d44c-4sz55" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.987124 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.986956 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wcs4w" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.993062 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w9cm\" (UniqueName: \"kubernetes.io/projected/da31d090-0806-4ef1-bd92-c64e2b1795d8-kube-api-access-8w9cm\") pod \"olm-operator-6b444d44fb-7gnfb\" (UID: \"da31d090-0806-4ef1-bd92-c64e2b1795d8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:31 crc kubenswrapper[4983]: I1125 20:29:31.995969 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.000839 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-vp8dm" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.018209 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hnlf\" (UniqueName: \"kubernetes.io/projected/ab247bf3-165b-4513-ad09-b33ce8fc15a8-kube-api-access-7hnlf\") pod \"collect-profiles-29401695-55fbx\" (UID: \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.045544 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4sz55" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.049815 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkthl\" (UniqueName: \"kubernetes.io/projected/5092e350-79ed-419f-8e96-d6e5c9430b64-kube-api-access-fkthl\") pod \"machine-config-server-6fm9n\" (UID: \"5092e350-79ed-419f-8e96-d6e5c9430b64\") " pod="openshift-machine-config-operator/machine-config-server-6fm9n" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.057009 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.057849 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:32.557546424 +0000 UTC m=+153.670079816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.059493 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cvg4v"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.068080 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6ds5\" (UniqueName: \"kubernetes.io/projected/09736212-5e5c-42ba-a184-b8b1f0f0a67c-kube-api-access-m6ds5\") pod \"dns-default-fc95k\" (UID: \"09736212-5e5c-42ba-a184-b8b1f0f0a67c\") " pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.096196 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.114184 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.118951 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.124379 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vd426" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.157798 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.158144 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:32.658130028 +0000 UTC m=+153.770663420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.169063 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-srjk5"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.169217 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6fm9n" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.175503 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.214930 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.232169 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.249685 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.263078 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.264971 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:32.764952163 +0000 UTC m=+153.877485555 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.322692 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sw79t"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.332981 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.335766 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.365465 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.366016 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:32.865980698 +0000 UTC m=+153.978514090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.429150 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.473473 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.473831 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:32.97381683 +0000 UTC m=+154.086350222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.496181 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.507088 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rb7rw"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.533875 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.575407 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.575591 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:33.075563574 +0000 UTC m=+154.188096966 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.575752 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.576225 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:33.07620986 +0000 UTC m=+154.188743252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: W1125 20:29:32.578768 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bdb6c5b_8666_459e_83c6_a783159a20cb.slice/crio-56737195ab454691542808d097177eaecd4645549c8cde7d594985f16aa2d7eb WatchSource:0}: Error finding container 56737195ab454691542808d097177eaecd4645549c8cde7d594985f16aa2d7eb: Status 404 returned error can't find the container with id 56737195ab454691542808d097177eaecd4645549c8cde7d594985f16aa2d7eb Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.678334 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.678705 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:33.178681183 +0000 UTC m=+154.291214575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.680782 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.681246 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:33.181229549 +0000 UTC m=+154.293762941 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.754251 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-vp8dm"] Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.782655 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.782804 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:33.282781617 +0000 UTC m=+154.395315009 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.782846 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.783129 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:33.283122446 +0000 UTC m=+154.395655838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.864739 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" event={"ID":"ec554c34-7720-4354-8639-b7b70a2f8894","Type":"ContainerStarted","Data":"5b16dd5ecaea15b395a3ff4048865e91fcb3863ac5f9845648b2166a3179f98a"} Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.883666 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.884105 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:33.384084619 +0000 UTC m=+154.496618011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.923747 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88" event={"ID":"a8b5262c-2b08-404a-a884-d5294dcc82be","Type":"ContainerStarted","Data":"233c30339bb39b5464c1fe2795220b1e467ab14c051a94f7429c8267b53a918f"} Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.941927 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" event={"ID":"51523753-e43c-4a7d-a3a2-412e6ef40670","Type":"ContainerStarted","Data":"3c5da3bda5679c5f56c2397222a9466ba5ee8d40b7c6a841dfba9471796a58d7"} Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.971854 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g8bfq" event={"ID":"06dff288-ef5e-4a4a-88e5-ce25c216ee5a","Type":"ContainerStarted","Data":"ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6"} Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.972261 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g8bfq" event={"ID":"06dff288-ef5e-4a4a-88e5-ce25c216ee5a","Type":"ContainerStarted","Data":"a4804018aca04da69ef9d4861dee01f19c6efc5a124f905c0c68ce7f931b47ef"} Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.986469 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.988408 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc" event={"ID":"69a54a85-daaa-41f3-8590-99dc51245879","Type":"ContainerStarted","Data":"d7c94b5c802009fce404e1d6e834d1a774303e714aa2756fa98b3605dfbeb5dc"} Nov 25 20:29:32 crc kubenswrapper[4983]: E1125 20:29:32.988649 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:33.488634276 +0000 UTC m=+154.601167668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:32 crc kubenswrapper[4983]: I1125 20:29:32.994283 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6fm9n" event={"ID":"5092e350-79ed-419f-8e96-d6e5c9430b64","Type":"ContainerStarted","Data":"c1ca23eb65a6833ea45e3b701d7fc402ae02ac66aff0b8ca09dc9a2c61ce4584"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.000478 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wcs4w"] Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 
20:29:33.004901 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s"] Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.016692 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch" event={"ID":"ec800216-1c1b-4324-a1be-2a0c5dcc6ce5","Type":"ContainerStarted","Data":"0f6f1308a6e17379e503759bb7b14e0826ce869fd4703e01e2d19bcfde300385"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.027683 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk" event={"ID":"1fff092b-fe51-487a-a6b5-af6ad677ec38","Type":"ContainerStarted","Data":"289c56d7ce7c8a3558a83e1a3bbd103436a5c059426e516b6e432cbd55b16bee"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.037190 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx"] Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.040015 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-b2krm" event={"ID":"e842492e-468d-46a1-b4ae-2098daf5e263","Type":"ContainerStarted","Data":"3fa371579f6ed57477349c5b7074ead9a80a359e3af41e3b1c04e65006f240b5"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.040392 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-b2krm" event={"ID":"e842492e-468d-46a1-b4ae-2098daf5e263","Type":"ContainerStarted","Data":"ba3e203c48f8b0e6aef55d72dcea0df272b140b23089f98c26a1326323589395"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.041465 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-b2krm" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.045963 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" event={"ID":"567037cb-2605-4e85-9d56-909dab2a8d1d","Type":"ContainerStarted","Data":"9eb698100f3cdfa9ef3dd3cbc8333ae64bd1826b6062c1a7321cedf52c897932"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.046020 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" event={"ID":"567037cb-2605-4e85-9d56-909dab2a8d1d","Type":"ContainerStarted","Data":"c771145014d4e20b1e883c0fa1b1a7479c4389bc6b31d930d50debd7951c7de2"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.047583 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" event={"ID":"9de5f7b4-7675-4e35-a273-8a68c4d127e1","Type":"ContainerStarted","Data":"5c71b7e7875ed3382b320a0c6432504bd2f581c40d6fff6a48f4e8209a5921f5"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.050997 4983 patch_prober.go:28] interesting pod/downloads-7954f5f757-b2krm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.051090 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-b2krm" podUID="e842492e-468d-46a1-b4ae-2098daf5e263" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.051301 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8"] Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.057918 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" event={"ID":"1e36d951-df6e-4b44-a4b5-4aaa3daefe75","Type":"ContainerStarted","Data":"f3533ca86d2b5c9179acdc220461d549e1618f8b827930c5bd9ebb9f56f0fb41"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.057951 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" event={"ID":"1e36d951-df6e-4b44-a4b5-4aaa3daefe75","Type":"ContainerStarted","Data":"015899e8fe3df62ff48e7ab044c12f22cc062a697f7c090dec7759f756ced509"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.058841 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" event={"ID":"61a26a3c-422b-4596-821e-fb0d287ce966","Type":"ContainerStarted","Data":"c6e143cfd42579c70bd1f0698a39cd417130cca232a1b9328cdcfbb688e01eee"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.082202 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" event={"ID":"32b0116d-fe96-4215-a627-49ef66a62147","Type":"ContainerStarted","Data":"71c3d4e6b5d6a255cdc6474440410101a84b9349bce7a31f99824596af8fbd3b"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.086314 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" event={"ID":"6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e","Type":"ContainerStarted","Data":"0a30ed757ccb0a53f0ea93b67b2d4c409160ef40e8d1790828ef08404056e621"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.087743 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:33 crc kubenswrapper[4983]: E1125 20:29:33.088112 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:33.58809695 +0000 UTC m=+154.700630342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.092178 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb"] Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.096317 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" event={"ID":"98915ddf-6a6b-4c4e-a8b5-379567bbbf09","Type":"ContainerStarted","Data":"51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.097404 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.104003 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" event={"ID":"50f76fe0-cc37-4a22-bb1a-7df5d6012224","Type":"ContainerStarted","Data":"5aa163679bc5d0a943d64ad4ed76aefc958f3a0f00a546ac1176a07a32c4c771"} Nov 25 20:29:33 crc kubenswrapper[4983]: 
I1125 20:29:33.114248 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-srjk5" event={"ID":"85269fe1-85ed-4c9d-9864-068497a85668","Type":"ContainerStarted","Data":"97da58b86d524e47a6111bf6dd014f94e1ea82e9f38254cf266ef41667c27789"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.117319 4983 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6x4tb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.117385 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" podUID="98915ddf-6a6b-4c4e-a8b5-379567bbbf09" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.120412 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh" event={"ID":"deeb38fe-7024-47af-94be-9099f96d6cc9","Type":"ContainerStarted","Data":"877ace0b5e18ef7e0571598775d696dfba83eef37c37efb6404f77cd9e25ad27"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.120456 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh" event={"ID":"deeb38fe-7024-47af-94be-9099f96d6cc9","Type":"ContainerStarted","Data":"52746cc3a0797ebf3417e8721567e8adb860ddd0a79264e5a44f3d78a871bf28"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.127103 4983 generic.go:334] "Generic (PLEG): container finished" podID="2a38d967-78e8-45a1-9093-d24e38d84da7" 
containerID="d9d60a8d084e4a7a244f4240e635f44cb5c7f215b18421fb1b85a7f8f9a0cb5d" exitCode=0 Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.127172 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" event={"ID":"2a38d967-78e8-45a1-9093-d24e38d84da7","Type":"ContainerDied","Data":"d9d60a8d084e4a7a244f4240e635f44cb5c7f215b18421fb1b85a7f8f9a0cb5d"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.144907 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" event={"ID":"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad","Type":"ContainerStarted","Data":"e6a5083aba79ae9553dc0e0bf02400af57918582ccba70be2bd3c360249f434f"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.154406 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" event={"ID":"6845406f-45aa-4abf-b2ea-729513677ab8","Type":"ContainerStarted","Data":"cb37efc09a95beba712935d608f4a2be613fb2ae01849d359f33f6a8eece3538"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.185274 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" event={"ID":"d10a20ce-f44b-45b4-b199-759adf792fe0","Type":"ContainerStarted","Data":"15aa319f4cc2213a57086dedd6c32607d2c2bf01e67f39c9517997063e61f77e"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.185732 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.190409 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: 
\"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:33 crc kubenswrapper[4983]: E1125 20:29:33.192105 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:33.692084712 +0000 UTC m=+154.804618164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.195750 4983 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-9zs6k container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" start-of-body= Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.196067 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" podUID="d10a20ce-f44b-45b4-b199-759adf792fe0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.8:6443/healthz\": dial tcp 10.217.0.8:6443: connect: connection refused" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.196239 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" event={"ID":"e07434b8-dec6-40ac-b297-d1dcff926553","Type":"ContainerStarted","Data":"15c49e8b3b97f7487ed48b8f40935788a63ae7027e150e95f74eaffee578b3d4"} Nov 
25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.206802 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" event={"ID":"5ab6fa75-c38e-4ad3-ad08-f391846e6fac","Type":"ContainerStarted","Data":"790b978fdb245e34a176a41b20b8e5d7d57f7894015618621e8c8fec19234439"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.206844 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" event={"ID":"5ab6fa75-c38e-4ad3-ad08-f391846e6fac","Type":"ContainerStarted","Data":"d4296719fec0d6a3f8d13c2e9983d9c114680f05b52d08b240616a80181a4fba"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.207602 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.244086 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" event={"ID":"d11029b9-9bae-4f73-a448-ae8996511256","Type":"ContainerStarted","Data":"29fe51267c9af4cd7c421459b01839d3dda35750f82ab5ad7fdc5a4a9be94792"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.244154 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" event={"ID":"d11029b9-9bae-4f73-a448-ae8996511256","Type":"ContainerStarted","Data":"1a79898f59a01b99fd28a3c3a7dcc46d1c6f9972ef4f5a6ac9ba17b6c106da67"} Nov 25 20:29:33 crc kubenswrapper[4983]: W1125 20:29:33.256714 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20acccdd_eff5_4128_a24e_c7c6aa9e4fd9.slice/crio-dff3445af01727ae39ac822e5fcbe2160e10fb8c6e9936f3ee7adc1fb74e1127 WatchSource:0}: Error finding container 
dff3445af01727ae39ac822e5fcbe2160e10fb8c6e9936f3ee7adc1fb74e1127: Status 404 returned error can't find the container with id dff3445af01727ae39ac822e5fcbe2160e10fb8c6e9936f3ee7adc1fb74e1127 Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.257014 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zql6p" event={"ID":"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91","Type":"ContainerStarted","Data":"e1651bf418ccff8eabac24e6ea8ee411daa4c923b49b1dc128d81ee02c5fab0d"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.259858 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-svs28"] Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.272285 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" event={"ID":"3bdb6c5b-8666-459e-83c6-a783159a20cb","Type":"ContainerStarted","Data":"56737195ab454691542808d097177eaecd4645549c8cde7d594985f16aa2d7eb"} Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.291411 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:33 crc kubenswrapper[4983]: E1125 20:29:33.293513 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:33.793493517 +0000 UTC m=+154.906026909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.311256 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8zwnb" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.340675 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4sz55"] Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.393172 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:33 crc kubenswrapper[4983]: E1125 20:29:33.393481 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:33.893470754 +0000 UTC m=+155.006004146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.430657 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vd426"] Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.500197 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:33 crc kubenswrapper[4983]: E1125 20:29:33.500640 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:34.000624218 +0000 UTC m=+155.113157610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.545337 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fc95k"] Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.640080 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:33 crc kubenswrapper[4983]: E1125 20:29:33.640987 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:34.140950114 +0000 UTC m=+155.253483506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.671606 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" podStartSLOduration=134.671529848 podStartE2EDuration="2m14.671529848s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:33.669956827 +0000 UTC m=+154.782490219" watchObservedRunningTime="2025-11-25 20:29:33.671529848 +0000 UTC m=+154.784063240" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.718431 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-lbln6" podStartSLOduration=134.718399396 podStartE2EDuration="2m14.718399396s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:33.717526463 +0000 UTC m=+154.830059845" watchObservedRunningTime="2025-11-25 20:29:33.718399396 +0000 UTC m=+154.830932778" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.720888 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.742088 4983 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:33 crc kubenswrapper[4983]: E1125 20:29:33.742508 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:34.242488592 +0000 UTC m=+155.355021984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.808783 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rkw88" podStartSLOduration=134.808763814 podStartE2EDuration="2m14.808763814s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:33.776375902 +0000 UTC m=+154.888909304" watchObservedRunningTime="2025-11-25 20:29:33.808763814 +0000 UTC m=+154.921297206" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.836788 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2ttdh" 
podStartSLOduration=133.836770681 podStartE2EDuration="2m13.836770681s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:33.807119771 +0000 UTC m=+154.919653163" watchObservedRunningTime="2025-11-25 20:29:33.836770681 +0000 UTC m=+154.949304073" Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.844770 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:33 crc kubenswrapper[4983]: E1125 20:29:33.847133 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:34.34711651 +0000 UTC m=+155.459649902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.950357 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:33 crc kubenswrapper[4983]: E1125 20:29:33.950996 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:34.450972439 +0000 UTC m=+155.563505831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:33 crc kubenswrapper[4983]: I1125 20:29:33.957634 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8zwnb" podStartSLOduration=134.957594271 podStartE2EDuration="2m14.957594271s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:33.955116936 +0000 UTC m=+155.067650328" watchObservedRunningTime="2025-11-25 20:29:33.957594271 +0000 UTC m=+155.070127673" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.053221 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:34 crc kubenswrapper[4983]: E1125 20:29:34.053772 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:34.553753589 +0000 UTC m=+155.666286981 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.154343 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:34 crc kubenswrapper[4983]: E1125 20:29:34.154743 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:34.654721193 +0000 UTC m=+155.767254585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.154812 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:34 crc kubenswrapper[4983]: E1125 20:29:34.155133 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:34.655121883 +0000 UTC m=+155.767655275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.167746 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zzlhs" podStartSLOduration=135.16772639 podStartE2EDuration="2m15.16772639s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:34.166128369 +0000 UTC m=+155.278661751" watchObservedRunningTime="2025-11-25 20:29:34.16772639 +0000 UTC m=+155.280259782" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.256529 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:34 crc kubenswrapper[4983]: E1125 20:29:34.256874 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:34.756847666 +0000 UTC m=+155.869381058 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.301467 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" event={"ID":"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad","Type":"ContainerStarted","Data":"0e0ab20265e9de711da253b2b78fd94f948b2dc7da3b375c5509f2c62eac91aa"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.311932 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-b2krm" podStartSLOduration=134.311915577 podStartE2EDuration="2m14.311915577s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:34.248330765 +0000 UTC m=+155.360864177" watchObservedRunningTime="2025-11-25 20:29:34.311915577 +0000 UTC m=+155.424448969" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.313656 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-ztngk" podStartSLOduration=134.313649292 podStartE2EDuration="2m14.313649292s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:34.310943222 +0000 UTC m=+155.423476614" watchObservedRunningTime="2025-11-25 20:29:34.313649292 +0000 UTC m=+155.426182684" Nov 25 20:29:34 crc 
kubenswrapper[4983]: I1125 20:29:34.357623 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:34 crc kubenswrapper[4983]: E1125 20:29:34.357901 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:34.857890801 +0000 UTC m=+155.970424183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.369137 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" event={"ID":"50f76fe0-cc37-4a22-bb1a-7df5d6012224","Type":"ContainerStarted","Data":"1bcd13bb77a0531aaa1da9520a219c71c110ae029595c14abd244a075d189d7f"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.370356 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.385303 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4sz55" 
event={"ID":"b18fa273-31a7-4818-a3bc-e0311d4f6bde","Type":"ContainerStarted","Data":"da2af0a4eb577011b57a0e94571a0ad097db105c94a502798bc72433ce223249"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.406224 4983 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvg4v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.406288 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" podUID="50f76fe0-cc37-4a22-bb1a-7df5d6012224" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.426726 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" event={"ID":"e07434b8-dec6-40ac-b297-d1dcff926553","Type":"ContainerStarted","Data":"5e1c7310592737266b3e6941e8d08c0fdb86e1ed372768d4e6352dcf8c55490f"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.437435 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xpk5j" podStartSLOduration=134.437418568 podStartE2EDuration="2m14.437418568s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:34.436239637 +0000 UTC m=+155.548773029" watchObservedRunningTime="2025-11-25 20:29:34.437418568 +0000 UTC m=+155.549951960" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.470074 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:34 crc kubenswrapper[4983]: E1125 20:29:34.471313 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:34.971287518 +0000 UTC m=+156.083820910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.499525 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" podStartSLOduration=134.499500921 podStartE2EDuration="2m14.499500921s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:34.49947415 +0000 UTC m=+155.612007542" watchObservedRunningTime="2025-11-25 20:29:34.499500921 +0000 UTC m=+155.612034313" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.516462 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-srjk5" 
event={"ID":"85269fe1-85ed-4c9d-9864-068497a85668","Type":"ContainerStarted","Data":"7c6a365c4d6243136a715f0992d6bd47c881ce2a67647cab294d19a4d54f643e"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.552281 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch" event={"ID":"ec800216-1c1b-4324-a1be-2a0c5dcc6ce5","Type":"ContainerStarted","Data":"3f01bc7e57b99e6a1b7b0f239db47b4168e98e8e15561ea7265ac55f2ea2b6bb"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.571036 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" event={"ID":"409a937d-29d5-426e-9aba-51c8e44b387a","Type":"ContainerStarted","Data":"33564e555abf0023bc8e0c0f876574b27e06001429d33fff06a62b6ee86f7260"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.572620 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:34 crc kubenswrapper[4983]: E1125 20:29:34.576029 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:35.076011249 +0000 UTC m=+156.188544701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.588136 4983 generic.go:334] "Generic (PLEG): container finished" podID="32b0116d-fe96-4215-a627-49ef66a62147" containerID="df251ca30b45defb8ca7c3d31dab418ad2b9da1dde4ded52e4b11c749fba098a" exitCode=0 Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.589047 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" event={"ID":"32b0116d-fe96-4215-a627-49ef66a62147","Type":"ContainerDied","Data":"df251ca30b45defb8ca7c3d31dab418ad2b9da1dde4ded52e4b11c749fba098a"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.628972 4983 generic.go:334] "Generic (PLEG): container finished" podID="51523753-e43c-4a7d-a3a2-412e6ef40670" containerID="0bc26230efa1d8e7bfd4caff4053da6a000c3939308f75e285d865757f38e65c" exitCode=0 Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.629887 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" event={"ID":"51523753-e43c-4a7d-a3a2-412e6ef40670","Type":"ContainerDied","Data":"0bc26230efa1d8e7bfd4caff4053da6a000c3939308f75e285d865757f38e65c"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.652824 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fc95k" event={"ID":"09736212-5e5c-42ba-a184-b8b1f0f0a67c","Type":"ContainerStarted","Data":"3e4f53680b497987ab91f7bed07da05453a1383e6012cb1658d5071c9d5f4819"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.673652 4983 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:34 crc kubenswrapper[4983]: E1125 20:29:34.675365 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:35.175144174 +0000 UTC m=+156.287677566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.690852 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" event={"ID":"5920c5fd-c1ab-4729-8dd1-8df4ee246684","Type":"ContainerStarted","Data":"ca1b804ffd741b4bbb40ccdef79bab2d4c88b43bbceb15774e35aa79c97b0f8d"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.690923 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.711495 4983 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hr5b8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.711577 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" podUID="5920c5fd-c1ab-4729-8dd1-8df4ee246684" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.716689 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" podStartSLOduration=134.716661163 podStartE2EDuration="2m14.716661163s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:34.709946708 +0000 UTC m=+155.822480100" watchObservedRunningTime="2025-11-25 20:29:34.716661163 +0000 UTC m=+155.829194555" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.746367 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-g8bfq" podStartSLOduration=134.746342354 podStartE2EDuration="2m14.746342354s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:34.746270152 +0000 UTC m=+155.858803544" watchObservedRunningTime="2025-11-25 20:29:34.746342354 +0000 UTC m=+155.858875746" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.770987 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" 
event={"ID":"61a26a3c-422b-4596-821e-fb0d287ce966","Type":"ContainerStarted","Data":"6332568dbeb9c9f273fc3027028d23ea639477c87a57cb3d296bfcf0a426e629"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.771095 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.778850 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:34 crc kubenswrapper[4983]: E1125 20:29:34.780142 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:35.280129362 +0000 UTC m=+156.392662744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.797784 4983 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qrplb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.797849 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" podUID="61a26a3c-422b-4596-821e-fb0d287ce966" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.819389 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vp8dm" event={"ID":"39878b72-74a2-4043-8dd6-b195b7030bfe","Type":"ContainerStarted","Data":"ba50be0ef727328bfcdc58668615c71690abd7666c623ebedba9b92e56669238"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.868184 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d2mch" podStartSLOduration=134.868162879 podStartE2EDuration="2m14.868162879s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-25 20:29:34.82048358 +0000 UTC m=+155.933016972" watchObservedRunningTime="2025-11-25 20:29:34.868162879 +0000 UTC m=+155.980696271" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.880406 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:34 crc kubenswrapper[4983]: E1125 20:29:34.881199 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:35.381183478 +0000 UTC m=+156.493716870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.921384 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vd426" event={"ID":"f6bb719d-969b-4b39-8495-85a5e91123a6","Type":"ContainerStarted","Data":"2ba86a6ffb71753c2cacba6a7ae608307a804bb87cf15be94965f64129577b46"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.926838 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" podStartSLOduration=134.926808033 
podStartE2EDuration="2m14.926808033s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:34.915821867 +0000 UTC m=+156.028355289" watchObservedRunningTime="2025-11-25 20:29:34.926808033 +0000 UTC m=+156.039341425" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.950685 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" event={"ID":"1e36d951-df6e-4b44-a4b5-4aaa3daefe75","Type":"ContainerStarted","Data":"ad3224b6e9c385a47adb6a651bb5f62e4339449ff6787488bfe09f6158352d3b"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.952482 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" podStartSLOduration=134.952454829 podStartE2EDuration="2m14.952454829s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:34.950723914 +0000 UTC m=+156.063257306" watchObservedRunningTime="2025-11-25 20:29:34.952454829 +0000 UTC m=+156.064988221" Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.983861 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" event={"ID":"ab247bf3-165b-4513-ad09-b33ce8fc15a8","Type":"ContainerStarted","Data":"b79cc83bc2ac01c7fc9f4ef8010f41f71288a2e24d2a2466e1240bfed6a29a3e"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.983944 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" 
(UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:34 crc kubenswrapper[4983]: E1125 20:29:34.984983 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:35.484969684 +0000 UTC m=+156.597503076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.994835 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" event={"ID":"97b196bf-3941-451e-818c-61af0664a204","Type":"ContainerStarted","Data":"b181343757c566b3b1d0d90377637998233a92a9a2ff910e780e4c9a73e52011"} Nov 25 20:29:34 crc kubenswrapper[4983]: I1125 20:29:34.997411 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.014270 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6fm9n" event={"ID":"5092e350-79ed-419f-8e96-d6e5c9430b64","Type":"ContainerStarted","Data":"58f2398b7dce02fef73fcd97b8ec79745378058aeb8680ada9a0db8c1a145555"} Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.020297 4983 patch_prober.go:28] interesting pod/router-default-5444994796-zql6p container/router namespace/openshift-ingress: Startup 
probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.020385 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zql6p" podUID="3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.046124 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" event={"ID":"ec554c34-7720-4354-8639-b7b70a2f8894","Type":"ContainerStarted","Data":"97da6d7542dcf05b712186334c735d66d0dff028d5eb6b5c93ad0126ca8854aa"} Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.066053 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wcs4w" event={"ID":"20acccdd-eff5-4128-a24e-c7c6aa9e4fd9","Type":"ContainerStarted","Data":"dff3445af01727ae39ac822e5fcbe2160e10fb8c6e9936f3ee7adc1fb74e1127"} Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.108695 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" podStartSLOduration=135.108670648 podStartE2EDuration="2m15.108670648s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.108543555 +0000 UTC m=+156.221076947" watchObservedRunningTime="2025-11-25 20:29:35.108670648 +0000 UTC m=+156.221204040" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.110689 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.121982 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" event={"ID":"da31d090-0806-4ef1-bd92-c64e2b1795d8","Type":"ContainerStarted","Data":"f77199331b6cb757e0233fd211ed574cd7c8b4e8edd9ae5068fe73e22afd2e7f"} Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.123678 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:35 crc kubenswrapper[4983]: E1125 20:29:35.124523 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:35.624490389 +0000 UTC m=+156.737023781 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.137300 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk" event={"ID":"1fff092b-fe51-487a-a6b5-af6ad677ec38","Type":"ContainerStarted","Data":"081cbc2bf7c3756d161265e862ede40f52c0c2bb995ee89e94ebd4d754b74064"} Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.142436 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:35 crc kubenswrapper[4983]: E1125 20:29:35.144660 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:35.644640063 +0000 UTC m=+156.757173455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.179724 4983 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7gnfb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.179801 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" podUID="da31d090-0806-4ef1-bd92-c64e2b1795d8" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.189697 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" event={"ID":"3bdb6c5b-8666-459e-83c6-a783159a20cb","Type":"ContainerStarted","Data":"b69d1bd849a08f2a5259e71fbdd688e9523c2ddbbfc5cfadd75c83d02cdeb86d"} Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.195752 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc" event={"ID":"69a54a85-daaa-41f3-8590-99dc51245879","Type":"ContainerStarted","Data":"6b1520062a4b120b370b8e10bf565199bb1c1acbe44a0d073a11d11af7d6cc15"} Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.196091 4983 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.209848 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" event={"ID":"6f4b9c50-fae5-4a4b-9632-7e7ba8519c0e","Type":"ContainerStarted","Data":"891767ca143ddbcec5f815ca034810ac7e075a6811a8b200cd4a00ebed134efe"} Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.210891 4983 patch_prober.go:28] interesting pod/downloads-7954f5f757-b2krm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.210935 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-b2krm" podUID="e842492e-468d-46a1-b4ae-2098daf5e263" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.217410 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-srjk5" podStartSLOduration=7.217386733 podStartE2EDuration="7.217386733s" podCreationTimestamp="2025-11-25 20:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.180523855 +0000 UTC m=+156.293057247" watchObservedRunningTime="2025-11-25 20:29:35.217386733 +0000 UTC m=+156.329920125" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.229350 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.248719 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.250543 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:29:35 crc kubenswrapper[4983]: E1125 20:29:35.251600 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:35.751581621 +0000 UTC m=+156.864115013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.370385 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:35 crc kubenswrapper[4983]: E1125 20:29:35.371267 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:35.871254321 +0000 UTC m=+156.983787713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.392707 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b4hzt" podStartSLOduration=135.392619086 podStartE2EDuration="2m15.392619086s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.363270923 +0000 UTC m=+156.475804315" watchObservedRunningTime="2025-11-25 20:29:35.392619086 +0000 UTC m=+156.505152478" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.396164 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6fm9n" podStartSLOduration=6.396151588 podStartE2EDuration="6.396151588s" podCreationTimestamp="2025-11-25 20:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.394952346 +0000 UTC m=+156.507485738" watchObservedRunningTime="2025-11-25 20:29:35.396151588 +0000 UTC m=+156.508684980" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.470149 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" podStartSLOduration=136.47013368 podStartE2EDuration="2m16.47013368s" podCreationTimestamp="2025-11-25 20:27:19 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.468870137 +0000 UTC m=+156.581403529" watchObservedRunningTime="2025-11-25 20:29:35.47013368 +0000 UTC m=+156.582667072" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.474004 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:35 crc kubenswrapper[4983]: E1125 20:29:35.474254 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:35.974240346 +0000 UTC m=+157.086773728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.553023 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc" podStartSLOduration=135.553006853 podStartE2EDuration="2m15.553006853s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.52442962 +0000 UTC m=+156.636963002" watchObservedRunningTime="2025-11-25 20:29:35.553006853 +0000 UTC m=+156.665540245" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.586908 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:35 crc kubenswrapper[4983]: E1125 20:29:35.587173 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:36.08715833 +0000 UTC m=+157.199691722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.587773 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-l2gbx" podStartSLOduration=135.587755436 podStartE2EDuration="2m15.587755436s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.586069792 +0000 UTC m=+156.698603184" watchObservedRunningTime="2025-11-25 20:29:35.587755436 +0000 UTC m=+156.700288828" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.685757 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk" podStartSLOduration=136.685745552 podStartE2EDuration="2m16.685745552s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.684983752 +0000 UTC m=+156.797517144" watchObservedRunningTime="2025-11-25 20:29:35.685745552 +0000 UTC m=+156.798278944" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.686952 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-59vxf" podStartSLOduration=135.686947073 
podStartE2EDuration="2m15.686947073s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.651901212 +0000 UTC m=+156.764434604" watchObservedRunningTime="2025-11-25 20:29:35.686947073 +0000 UTC m=+156.799480465" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.692007 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:35 crc kubenswrapper[4983]: E1125 20:29:35.692372 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:36.192358084 +0000 UTC m=+157.304891476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.737891 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" podStartSLOduration=135.737876686 podStartE2EDuration="2m15.737876686s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.735586107 +0000 UTC m=+156.848119499" watchObservedRunningTime="2025-11-25 20:29:35.737876686 +0000 UTC m=+156.850410078" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.794248 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:35 crc kubenswrapper[4983]: E1125 20:29:35.794576 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:36.294548749 +0000 UTC m=+157.407082141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.799421 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vd426" podStartSLOduration=135.799407645 podStartE2EDuration="2m15.799407645s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.7988448 +0000 UTC m=+156.911378182" watchObservedRunningTime="2025-11-25 20:29:35.799407645 +0000 UTC m=+156.911941027" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.895275 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:35 crc kubenswrapper[4983]: E1125 20:29:35.895669 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:36.395653646 +0000 UTC m=+157.508187038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.939162 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-c8p67" podStartSLOduration=135.939144626 podStartE2EDuration="2m15.939144626s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.892873374 +0000 UTC m=+157.005406766" watchObservedRunningTime="2025-11-25 20:29:35.939144626 +0000 UTC m=+157.051678018" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.941914 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" podStartSLOduration=135.941903467 podStartE2EDuration="2m15.941903467s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:35.938592191 +0000 UTC m=+157.051125583" watchObservedRunningTime="2025-11-25 20:29:35.941903467 +0000 UTC m=+157.054436859" Nov 25 20:29:35 crc kubenswrapper[4983]: I1125 20:29:35.998359 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" 
(UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:35 crc kubenswrapper[4983]: E1125 20:29:35.998675 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:36.498662662 +0000 UTC m=+157.611196054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.001718 4983 patch_prober.go:28] interesting pod/router-default-5444994796-zql6p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 20:29:36 crc kubenswrapper[4983]: [-]has-synced failed: reason withheld Nov 25 20:29:36 crc kubenswrapper[4983]: [+]process-running ok Nov 25 20:29:36 crc kubenswrapper[4983]: healthz check failed Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.001755 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zql6p" podUID="3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.095710 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-zql6p" podStartSLOduration=136.095694483 
podStartE2EDuration="2m16.095694483s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:36.033092717 +0000 UTC m=+157.145626099" watchObservedRunningTime="2025-11-25 20:29:36.095694483 +0000 UTC m=+157.208227875" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.099826 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:36 crc kubenswrapper[4983]: E1125 20:29:36.100363 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:36.600324433 +0000 UTC m=+157.712857825 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.203235 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:36 crc kubenswrapper[4983]: E1125 20:29:36.203511 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:36.703499784 +0000 UTC m=+157.816033166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.240459 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4sz55" event={"ID":"b18fa273-31a7-4818-a3bc-e0311d4f6bde","Type":"ContainerStarted","Data":"c9cba01ae9bea9abcfd1e428b3f4f8ed3dbd72cf0bc1b67657cf984598e8459a"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.240501 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4sz55" event={"ID":"b18fa273-31a7-4818-a3bc-e0311d4f6bde","Type":"ContainerStarted","Data":"b3335c99860d4095b6639c7f9701977c279e4417153579317ac17a0d0caffe16"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.242223 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc" event={"ID":"69a54a85-daaa-41f3-8590-99dc51245879","Type":"ContainerStarted","Data":"f2c0bfb4a5c354548d83a53e67402836c2ef9a2b167b62ef1568765772c7e3b5"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.250816 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" event={"ID":"5920c5fd-c1ab-4729-8dd1-8df4ee246684","Type":"ContainerStarted","Data":"b6fa379c709107f5f33aef2385786f8fd0d1663ca647d42a0221e361690e32e0"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.256423 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" event={"ID":"da31d090-0806-4ef1-bd92-c64e2b1795d8","Type":"ContainerStarted","Data":"e1250f85cb4bdfa623325a037849d38f9d937147fe2d5b10481e2127ae29ceda"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.258773 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vd426" event={"ID":"f6bb719d-969b-4b39-8495-85a5e91123a6","Type":"ContainerStarted","Data":"dd0be0d9f8a3551ec0ba66717ef3bd8ac05c8b66171fbcb78d4de936c8a6627e"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.265918 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wcs4w" event={"ID":"20acccdd-eff5-4128-a24e-c7c6aa9e4fd9","Type":"ContainerStarted","Data":"5573ed2b69f6473679a8275b307f743cc32a7b73e5fc34bd53e747014a51e333"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.265961 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wcs4w" event={"ID":"20acccdd-eff5-4128-a24e-c7c6aa9e4fd9","Type":"ContainerStarted","Data":"5439efa5d2df924eaf917e09ed5c6e9d31b55d1a9af719e20633821e7472f4b3"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.279598 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4sz55" podStartSLOduration=136.279582041 podStartE2EDuration="2m16.279582041s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:36.272955569 +0000 UTC m=+157.385488961" watchObservedRunningTime="2025-11-25 20:29:36.279582041 +0000 UTC m=+157.392115433" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.282326 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" event={"ID":"ab247bf3-165b-4513-ad09-b33ce8fc15a8","Type":"ContainerStarted","Data":"874971330c4615c40b2c77b0a2d79f04760f84ddb70cdf95a968f40aed4dd84a"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.302045 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" event={"ID":"409a937d-29d5-426e-9aba-51c8e44b387a","Type":"ContainerStarted","Data":"41d1dddaca3b0e8f41aa61f95c2b932d652e5b6d13db546903d3037e131a85bd"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.304881 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:36 crc kubenswrapper[4983]: E1125 20:29:36.305816 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:36.805793282 +0000 UTC m=+157.918326684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.329770 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7gnfb" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.340762 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fc95k" event={"ID":"09736212-5e5c-42ba-a184-b8b1f0f0a67c","Type":"ContainerStarted","Data":"56c7d69907ecebcb9ccb8b4852a29e0a0b7b0ceb49b82b57cb310418acec8ee0"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.340801 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fc95k" event={"ID":"09736212-5e5c-42ba-a184-b8b1f0f0a67c","Type":"ContainerStarted","Data":"b770150386d988ca3e3d6cd814d03d8cffff23fee5c24e8d04139d5b71a02381"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.341093 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.354727 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wcs4w" podStartSLOduration=136.354709523 podStartE2EDuration="2m16.354709523s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:36.305610517 +0000 UTC m=+157.418143909" watchObservedRunningTime="2025-11-25 
20:29:36.354709523 +0000 UTC m=+157.467242915" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.355103 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sxp9s" podStartSLOduration=136.355098113 podStartE2EDuration="2m16.355098113s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:36.352830594 +0000 UTC m=+157.465363986" watchObservedRunningTime="2025-11-25 20:29:36.355098113 +0000 UTC m=+157.467631515" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.382084 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r26wk" event={"ID":"1fff092b-fe51-487a-a6b5-af6ad677ec38","Type":"ContainerStarted","Data":"36b1138325a608537420aa21bfc85f5ecf56209c3539f71a7850ddb2cf3b285c"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.406608 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:36 crc kubenswrapper[4983]: E1125 20:29:36.408015 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:36.908000638 +0000 UTC m=+158.020534030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.438806 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" event={"ID":"51523753-e43c-4a7d-a3a2-412e6ef40670","Type":"ContainerStarted","Data":"b4bf89066c8d273dff7a66a6eaa099bb26d54fc681bad9b1eb94fca59e978c03"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.488746 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zql6p" event={"ID":"3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91","Type":"ContainerStarted","Data":"0626c09205ff81016f4ea424ca940e1ae1f705da478885338753557a7758bf6c"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.513117 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fc95k" podStartSLOduration=7.513086638 podStartE2EDuration="7.513086638s" podCreationTimestamp="2025-11-25 20:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:36.47620164 +0000 UTC m=+157.588735022" watchObservedRunningTime="2025-11-25 20:29:36.513086638 +0000 UTC m=+157.625620030" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.513201 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:36 crc kubenswrapper[4983]: E1125 20:29:36.513366 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:37.013332774 +0000 UTC m=+158.125866166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.513668 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.515427 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" podStartSLOduration=136.515418549 podStartE2EDuration="2m16.515418549s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:36.513205531 +0000 UTC m=+157.625738923" watchObservedRunningTime="2025-11-25 20:29:36.515418549 +0000 UTC m=+157.627951941" Nov 25 20:29:36 crc 
kubenswrapper[4983]: E1125 20:29:36.515972 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:37.015958473 +0000 UTC m=+158.128491865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.533616 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vp8dm" event={"ID":"39878b72-74a2-4043-8dd6-b195b7030bfe","Type":"ContainerStarted","Data":"286f39292799ea557c971ddd849f852db75232f54a87d56f0449f7021039bb8e"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.533682 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-vp8dm" event={"ID":"39878b72-74a2-4043-8dd6-b195b7030bfe","Type":"ContainerStarted","Data":"18e6d739dfe38dc8bc64464ba44c7540747d05dee56e4d0d3ae033e1664fdbcd"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.566948 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" event={"ID":"2a38d967-78e8-45a1-9093-d24e38d84da7","Type":"ContainerStarted","Data":"1fca1172550ca8b67113b98af72762ec402be5adc993b60b48181febfa5d947d"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.567607 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.569610 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" event={"ID":"6845406f-45aa-4abf-b2ea-729513677ab8","Type":"ContainerStarted","Data":"0e69949ada65dceeb549e9ef524a3223b4cc04c3cb17f9363fe2ab9dbb849518"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.613206 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" event={"ID":"32b0116d-fe96-4215-a627-49ef66a62147","Type":"ContainerStarted","Data":"1131ccc9d4ad030f5db6c26eac895e820cf43f0616d02413abe276afdc1f90ba"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.614349 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:36 crc kubenswrapper[4983]: E1125 20:29:36.615365 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:37.115333375 +0000 UTC m=+158.227866767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.628544 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" event={"ID":"e07434b8-dec6-40ac-b297-d1dcff926553","Type":"ContainerStarted","Data":"1ad812f49c8a0fafee57c5357108c657cdf2b821eb14228e70ad5f1229b8f75c"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.629587 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-vp8dm" podStartSLOduration=136.629568134 podStartE2EDuration="2m16.629568134s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:36.59128416 +0000 UTC m=+157.703817552" watchObservedRunningTime="2025-11-25 20:29:36.629568134 +0000 UTC m=+157.742101526" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.634669 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" event={"ID":"9de5f7b4-7675-4e35-a273-8a68c4d127e1","Type":"ContainerStarted","Data":"c3b9a1ef42ce19b196f0aad7cd26defd6d43a2540fcf53210f3daf849f2b3352"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.636218 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-svs28" 
event={"ID":"97b196bf-3941-451e-818c-61af0664a204","Type":"ContainerStarted","Data":"31235cf799e6e35ea0acec05c1ad263a51aec00355d53ee7bd902f2cd33ac123"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.650873 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" event={"ID":"4a4e3ed8-ecdd-453f-9d6c-3e87d01e90ad","Type":"ContainerStarted","Data":"5d59dbf89bf7e7580371b96924c52a3d48a5428fe2af15fcfd09fd3f29fa8cef"} Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.658908 4983 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cvg4v container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.659132 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" podUID="50f76fe0-cc37-4a22-bb1a-7df5d6012224" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.667109 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" podStartSLOduration=137.667089879 podStartE2EDuration="2m17.667089879s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:36.666257698 +0000 UTC m=+157.778791110" watchObservedRunningTime="2025-11-25 20:29:36.667089879 +0000 UTC m=+157.779623271" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.668354 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" podStartSLOduration=137.668347642 podStartE2EDuration="2m17.668347642s" podCreationTimestamp="2025-11-25 20:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:36.628173118 +0000 UTC m=+157.740706510" watchObservedRunningTime="2025-11-25 20:29:36.668347642 +0000 UTC m=+157.780881034" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.676280 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qrplb" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.717482 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2qt5n" podStartSLOduration=136.717454088 podStartE2EDuration="2m16.717454088s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:36.71713648 +0000 UTC m=+157.829669872" watchObservedRunningTime="2025-11-25 20:29:36.717454088 +0000 UTC m=+157.829987480" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.718247 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:36 crc kubenswrapper[4983]: E1125 20:29:36.718543 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 20:29:37.218526046 +0000 UTC m=+158.331059438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.822642 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:36 crc kubenswrapper[4983]: E1125 20:29:36.822871 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:37.322821546 +0000 UTC m=+158.435354928 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.827242 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:36 crc kubenswrapper[4983]: E1125 20:29:36.830887 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:37.330863815 +0000 UTC m=+158.443397207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.843852 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fccn" podStartSLOduration=136.843837092 podStartE2EDuration="2m16.843837092s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:36.771997525 +0000 UTC m=+157.884530917" watchObservedRunningTime="2025-11-25 20:29:36.843837092 +0000 UTC m=+157.956370474" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.853987 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4zs2f"] Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.854921 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.857735 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sw79t" podStartSLOduration=136.857721122 podStartE2EDuration="2m16.857721122s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:36.841190203 +0000 UTC m=+157.953723595" watchObservedRunningTime="2025-11-25 20:29:36.857721122 +0000 UTC m=+157.970254514" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.861297 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.868218 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4zs2f"] Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.932411 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.932876 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33636a92-6a39-4007-b537-94bdfa5c9191-utilities\") pod \"community-operators-4zs2f\" (UID: \"33636a92-6a39-4007-b537-94bdfa5c9191\") " pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.932985 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-722g9\" (UniqueName: \"kubernetes.io/projected/33636a92-6a39-4007-b537-94bdfa5c9191-kube-api-access-722g9\") pod \"community-operators-4zs2f\" (UID: \"33636a92-6a39-4007-b537-94bdfa5c9191\") " pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:29:36 crc kubenswrapper[4983]: I1125 20:29:36.933021 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33636a92-6a39-4007-b537-94bdfa5c9191-catalog-content\") pod \"community-operators-4zs2f\" (UID: \"33636a92-6a39-4007-b537-94bdfa5c9191\") " pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:29:36 crc kubenswrapper[4983]: E1125 20:29:36.933178 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:37.433155331 +0000 UTC m=+158.545688723 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.006860 4983 patch_prober.go:28] interesting pod/router-default-5444994796-zql6p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 20:29:37 crc kubenswrapper[4983]: [-]has-synced failed: reason withheld Nov 25 20:29:37 crc kubenswrapper[4983]: [+]process-running ok Nov 25 20:29:37 crc kubenswrapper[4983]: healthz check failed Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.006924 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zql6p" podUID="3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.035415 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-722g9\" (UniqueName: \"kubernetes.io/projected/33636a92-6a39-4007-b537-94bdfa5c9191-kube-api-access-722g9\") pod \"community-operators-4zs2f\" (UID: \"33636a92-6a39-4007-b537-94bdfa5c9191\") " pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.035487 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33636a92-6a39-4007-b537-94bdfa5c9191-catalog-content\") pod \"community-operators-4zs2f\" (UID: 
\"33636a92-6a39-4007-b537-94bdfa5c9191\") " pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.035534 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33636a92-6a39-4007-b537-94bdfa5c9191-utilities\") pod \"community-operators-4zs2f\" (UID: \"33636a92-6a39-4007-b537-94bdfa5c9191\") " pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.035609 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:37 crc kubenswrapper[4983]: E1125 20:29:37.036033 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:37.536012644 +0000 UTC m=+158.648546036 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.036976 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33636a92-6a39-4007-b537-94bdfa5c9191-catalog-content\") pod \"community-operators-4zs2f\" (UID: \"33636a92-6a39-4007-b537-94bdfa5c9191\") " pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.037049 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33636a92-6a39-4007-b537-94bdfa5c9191-utilities\") pod \"community-operators-4zs2f\" (UID: \"33636a92-6a39-4007-b537-94bdfa5c9191\") " pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.037873 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v5mqd"] Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.039569 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.045589 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.056513 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v5mqd"] Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.080015 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-722g9\" (UniqueName: \"kubernetes.io/projected/33636a92-6a39-4007-b537-94bdfa5c9191-kube-api-access-722g9\") pod \"community-operators-4zs2f\" (UID: \"33636a92-6a39-4007-b537-94bdfa5c9191\") " pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.137806 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:37 crc kubenswrapper[4983]: E1125 20:29:37.138692 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:37.638669081 +0000 UTC m=+158.751202473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.226201 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.240070 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.240137 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2d46db-fa4e-4967-89d2-e6993f05bb90-utilities\") pod \"certified-operators-v5mqd\" (UID: \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\") " pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.240186 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgplg\" (UniqueName: \"kubernetes.io/projected/cb2d46db-fa4e-4967-89d2-e6993f05bb90-kube-api-access-sgplg\") pod \"certified-operators-v5mqd\" (UID: \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\") " pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.240226 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2d46db-fa4e-4967-89d2-e6993f05bb90-catalog-content\") pod \"certified-operators-v5mqd\" (UID: \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\") " pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:29:37 crc kubenswrapper[4983]: E1125 20:29:37.245143 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:37.745126607 +0000 UTC m=+158.857659999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.257866 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hrbgw"] Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.258852 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hrbgw" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.259315 4983 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hr5b8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.259348 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" podUID="5920c5fd-c1ab-4729-8dd1-8df4ee246684" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.286087 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hrbgw"] Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.346355 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.346528 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2d46db-fa4e-4967-89d2-e6993f05bb90-utilities\") pod \"certified-operators-v5mqd\" (UID: \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\") " pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.346580 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sgplg\" (UniqueName: \"kubernetes.io/projected/cb2d46db-fa4e-4967-89d2-e6993f05bb90-kube-api-access-sgplg\") pod \"certified-operators-v5mqd\" (UID: \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\") " pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.346655 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620f0ca8-67c8-45fc-a41b-5af30115fc1b-catalog-content\") pod \"community-operators-hrbgw\" (UID: \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\") " pod="openshift-marketplace/community-operators-hrbgw" Nov 25 20:29:37 crc kubenswrapper[4983]: E1125 20:29:37.346694 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:37.846679816 +0000 UTC m=+158.959213208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.346800 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7xw8\" (UniqueName: \"kubernetes.io/projected/620f0ca8-67c8-45fc-a41b-5af30115fc1b-kube-api-access-w7xw8\") pod \"community-operators-hrbgw\" (UID: \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\") " pod="openshift-marketplace/community-operators-hrbgw" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.346880 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2d46db-fa4e-4967-89d2-e6993f05bb90-catalog-content\") pod \"certified-operators-v5mqd\" (UID: \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\") " pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.346908 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620f0ca8-67c8-45fc-a41b-5af30115fc1b-utilities\") pod \"community-operators-hrbgw\" (UID: \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\") " pod="openshift-marketplace/community-operators-hrbgw" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.347018 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2d46db-fa4e-4967-89d2-e6993f05bb90-utilities\") pod \"certified-operators-v5mqd\" (UID: \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\") 
" pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.347455 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2d46db-fa4e-4967-89d2-e6993f05bb90-catalog-content\") pod \"certified-operators-v5mqd\" (UID: \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\") " pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.427376 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgplg\" (UniqueName: \"kubernetes.io/projected/cb2d46db-fa4e-4967-89d2-e6993f05bb90-kube-api-access-sgplg\") pod \"certified-operators-v5mqd\" (UID: \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\") " pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.451217 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620f0ca8-67c8-45fc-a41b-5af30115fc1b-utilities\") pod \"community-operators-hrbgw\" (UID: \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\") " pod="openshift-marketplace/community-operators-hrbgw" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.451284 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.451322 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620f0ca8-67c8-45fc-a41b-5af30115fc1b-catalog-content\") pod \"community-operators-hrbgw\" (UID: 
\"620f0ca8-67c8-45fc-a41b-5af30115fc1b\") " pod="openshift-marketplace/community-operators-hrbgw" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.451352 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7xw8\" (UniqueName: \"kubernetes.io/projected/620f0ca8-67c8-45fc-a41b-5af30115fc1b-kube-api-access-w7xw8\") pod \"community-operators-hrbgw\" (UID: \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\") " pod="openshift-marketplace/community-operators-hrbgw" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.451963 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620f0ca8-67c8-45fc-a41b-5af30115fc1b-utilities\") pod \"community-operators-hrbgw\" (UID: \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\") " pod="openshift-marketplace/community-operators-hrbgw" Nov 25 20:29:37 crc kubenswrapper[4983]: E1125 20:29:37.452198 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:37.952188427 +0000 UTC m=+159.064721809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.452502 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620f0ca8-67c8-45fc-a41b-5af30115fc1b-catalog-content\") pod \"community-operators-hrbgw\" (UID: \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\") " pod="openshift-marketplace/community-operators-hrbgw" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.467660 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rs4kr"] Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.468641 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.513638 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7xw8\" (UniqueName: \"kubernetes.io/projected/620f0ca8-67c8-45fc-a41b-5af30115fc1b-kube-api-access-w7xw8\") pod \"community-operators-hrbgw\" (UID: \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\") " pod="openshift-marketplace/community-operators-hrbgw" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.514706 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rs4kr"] Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.552169 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.552389 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66b97\" (UniqueName: \"kubernetes.io/projected/53324730-9a91-42c0-8d11-9f6789b9deeb-kube-api-access-66b97\") pod \"certified-operators-rs4kr\" (UID: \"53324730-9a91-42c0-8d11-9f6789b9deeb\") " pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.552519 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53324730-9a91-42c0-8d11-9f6789b9deeb-catalog-content\") pod \"certified-operators-rs4kr\" (UID: \"53324730-9a91-42c0-8d11-9f6789b9deeb\") " pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.552581 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53324730-9a91-42c0-8d11-9f6789b9deeb-utilities\") pod \"certified-operators-rs4kr\" (UID: \"53324730-9a91-42c0-8d11-9f6789b9deeb\") " pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:29:37 crc kubenswrapper[4983]: E1125 20:29:37.552740 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:38.052712389 +0000 UTC m=+159.165245781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.605069 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hrbgw" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.657274 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.657719 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53324730-9a91-42c0-8d11-9f6789b9deeb-catalog-content\") pod \"certified-operators-rs4kr\" (UID: \"53324730-9a91-42c0-8d11-9f6789b9deeb\") " pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.657748 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53324730-9a91-42c0-8d11-9f6789b9deeb-utilities\") pod \"certified-operators-rs4kr\" (UID: \"53324730-9a91-42c0-8d11-9f6789b9deeb\") " pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.657781 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66b97\" (UniqueName: \"kubernetes.io/projected/53324730-9a91-42c0-8d11-9f6789b9deeb-kube-api-access-66b97\") pod \"certified-operators-rs4kr\" (UID: \"53324730-9a91-42c0-8d11-9f6789b9deeb\") " pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.659783 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53324730-9a91-42c0-8d11-9f6789b9deeb-catalog-content\") pod 
\"certified-operators-rs4kr\" (UID: \"53324730-9a91-42c0-8d11-9f6789b9deeb\") " pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.660164 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53324730-9a91-42c0-8d11-9f6789b9deeb-utilities\") pod \"certified-operators-rs4kr\" (UID: \"53324730-9a91-42c0-8d11-9f6789b9deeb\") " pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:29:37 crc kubenswrapper[4983]: E1125 20:29:37.660608 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:38.160591482 +0000 UTC m=+159.273124874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.682870 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.740892 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66b97\" (UniqueName: \"kubernetes.io/projected/53324730-9a91-42c0-8d11-9f6789b9deeb-kube-api-access-66b97\") pod \"certified-operators-rs4kr\" (UID: \"53324730-9a91-42c0-8d11-9f6789b9deeb\") " pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.760523 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:37 crc kubenswrapper[4983]: E1125 20:29:37.761063 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:38.261042262 +0000 UTC m=+159.373575654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.767487 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" event={"ID":"32b0116d-fe96-4215-a627-49ef66a62147","Type":"ContainerStarted","Data":"1cb49d6419c5a545c56dcadc84657aad792575bd5642948d88d0c9a6644647a7"} Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.789073 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" event={"ID":"6845406f-45aa-4abf-b2ea-729513677ab8","Type":"ContainerStarted","Data":"3c8c3f0136ae64a92fbe755857fe7b5dafe8deb733fe413f1804f1a5738cd9af"} Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.798386 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.866684 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:37 crc kubenswrapper[4983]: E1125 20:29:37.908306 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 20:29:38.408281377 +0000 UTC m=+159.520814759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.969273 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:37 crc kubenswrapper[4983]: E1125 20:29:37.969719 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:38.469695133 +0000 UTC m=+159.582228525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.969894 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:37 crc kubenswrapper[4983]: E1125 20:29:37.970259 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:38.470247107 +0000 UTC m=+159.582780499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:37 crc kubenswrapper[4983]: I1125 20:29:37.991328 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4zs2f"] Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.023810 4983 patch_prober.go:28] interesting pod/router-default-5444994796-zql6p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 20:29:38 crc kubenswrapper[4983]: [-]has-synced failed: reason withheld Nov 25 20:29:38 crc kubenswrapper[4983]: [+]process-running ok Nov 25 20:29:38 crc kubenswrapper[4983]: healthz check failed Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.024312 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zql6p" podUID="3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.087708 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:38 crc kubenswrapper[4983]: E1125 20:29:38.088282 4983 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:38.588254014 +0000 UTC m=+159.700787406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.189039 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:38 crc kubenswrapper[4983]: E1125 20:29:38.189307 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:38.689296029 +0000 UTC m=+159.801829421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.227793 4983 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-c94zn container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.227841 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" podUID="2a38d967-78e8-45a1-9093-d24e38d84da7" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.229655 4983 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-c94zn container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.229723 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" podUID="2a38d967-78e8-45a1-9093-d24e38d84da7" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.291503 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:38 crc kubenswrapper[4983]: E1125 20:29:38.291852 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:38.791816512 +0000 UTC m=+159.904349904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.291886 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:38 crc kubenswrapper[4983]: E1125 20:29:38.292210 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:38.792201122 +0000 UTC m=+159.904734524 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.334984 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hrbgw"] Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.393173 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:38 crc kubenswrapper[4983]: E1125 20:29:38.393948 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:38.893932916 +0000 UTC m=+160.006466308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.421183 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v5mqd"] Nov 25 20:29:38 crc kubenswrapper[4983]: W1125 20:29:38.460440 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb2d46db_fa4e_4967_89d2_e6993f05bb90.slice/crio-30c63b5677bc5e22a9ee8aac4af23caef14d68ea8eec0818b7839d0b138a41f9 WatchSource:0}: Error finding container 30c63b5677bc5e22a9ee8aac4af23caef14d68ea8eec0818b7839d0b138a41f9: Status 404 returned error can't find the container with id 30c63b5677bc5e22a9ee8aac4af23caef14d68ea8eec0818b7839d0b138a41f9 Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.495006 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:38 crc kubenswrapper[4983]: E1125 20:29:38.495324 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 20:29:38.99531228 +0000 UTC m=+160.107845672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.509120 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rs4kr"] Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.512498 4983 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hr5b8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 25 20:29:38 crc kubenswrapper[4983]: [+]log ok Nov 25 20:29:38 crc kubenswrapper[4983]: [-]poststarthook/generic-apiserver-start-informers failed: reason withheld Nov 25 20:29:38 crc kubenswrapper[4983]: [-]poststarthook/max-in-flight-filter failed: reason withheld Nov 25 20:29:38 crc kubenswrapper[4983]: [-]poststarthook/storage-object-count-tracker-hook failed: reason withheld Nov 25 20:29:38 crc kubenswrapper[4983]: healthz check failed Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.512546 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" podUID="5920c5fd-c1ab-4729-8dd1-8df4ee246684" containerName="packageserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.543801 4983 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 25 20:29:38 crc 
kubenswrapper[4983]: I1125 20:29:38.596350 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:38 crc kubenswrapper[4983]: E1125 20:29:38.596464 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:39.096441017 +0000 UTC m=+160.208974409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.596663 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:38 crc kubenswrapper[4983]: E1125 20:29:38.596948 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 20:29:39.09693758 +0000 UTC m=+160.209470972 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gznhv" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.658022 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-c94zn" Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.697696 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:38 crc kubenswrapper[4983]: E1125 20:29:38.698094 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 20:29:39.198078798 +0000 UTC m=+160.310612190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.770372 4983 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-25T20:29:38.54382513Z","Handler":null,"Name":""} Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.780829 4983 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.780875 4983 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.798208 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rs4kr" event={"ID":"53324730-9a91-42c0-8d11-9f6789b9deeb","Type":"ContainerStarted","Data":"2c0f93fae0c4279b1ea01b4a689f2ba559a9c31f48a311badfba5abac51deb6c"} Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.798287 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rs4kr" event={"ID":"53324730-9a91-42c0-8d11-9f6789b9deeb","Type":"ContainerStarted","Data":"5559a1da7bfa96b611a9f0935e747c8c0bc6d4a146433b986f38e7f8f2b2b0ae"} Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.799004 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.801730 4983 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.803223 4983 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.803266 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.805840 4983 generic.go:334] "Generic (PLEG): container finished" podID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" containerID="48eaf7ff90089f40e6c50970c0424e475bd86b7b4b32aa979600ef0acd1c4998" exitCode=0 Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.806314 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5mqd" event={"ID":"cb2d46db-fa4e-4967-89d2-e6993f05bb90","Type":"ContainerDied","Data":"48eaf7ff90089f40e6c50970c0424e475bd86b7b4b32aa979600ef0acd1c4998"} Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.806349 4983 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5mqd" event={"ID":"cb2d46db-fa4e-4967-89d2-e6993f05bb90","Type":"ContainerStarted","Data":"30c63b5677bc5e22a9ee8aac4af23caef14d68ea8eec0818b7839d0b138a41f9"} Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.813891 4983 generic.go:334] "Generic (PLEG): container finished" podID="620f0ca8-67c8-45fc-a41b-5af30115fc1b" containerID="9cb0a1b8d699e7225570b15cfdf19261fcd314a612ee47a4ef8062a16ffd11f6" exitCode=0 Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.813949 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrbgw" event={"ID":"620f0ca8-67c8-45fc-a41b-5af30115fc1b","Type":"ContainerDied","Data":"9cb0a1b8d699e7225570b15cfdf19261fcd314a612ee47a4ef8062a16ffd11f6"} Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.813976 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrbgw" event={"ID":"620f0ca8-67c8-45fc-a41b-5af30115fc1b","Type":"ContainerStarted","Data":"136e5584b7f00d212c3d7bf8e8e0d3dafcf9ea39e27541dbb26b6fd5efa80080"} Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.821129 4983 generic.go:334] "Generic (PLEG): container finished" podID="33636a92-6a39-4007-b537-94bdfa5c9191" containerID="74d2d5b391121cfdfba611b204473c80c08cfbdd65392bb5e6beec12b8783c4f" exitCode=0 Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.821239 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zs2f" event={"ID":"33636a92-6a39-4007-b537-94bdfa5c9191","Type":"ContainerDied","Data":"74d2d5b391121cfdfba611b204473c80c08cfbdd65392bb5e6beec12b8783c4f"} Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.821293 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zs2f" 
event={"ID":"33636a92-6a39-4007-b537-94bdfa5c9191","Type":"ContainerStarted","Data":"d1e9e5f0d44cd848ee2805bd316669e4b312fb0a4507ec40677ab3ca391c687d"} Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.853427 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" event={"ID":"6845406f-45aa-4abf-b2ea-729513677ab8","Type":"ContainerStarted","Data":"f736acc3c99d1bf805a913b4f95d9809664e91c379fe20d92af6d93e11cb71de"} Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.917058 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gznhv\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:38 crc kubenswrapper[4983]: I1125 20:29:38.976809 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.002239 4983 patch_prober.go:28] interesting pod/router-default-5444994796-zql6p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 20:29:39 crc kubenswrapper[4983]: [-]has-synced failed: reason withheld Nov 25 20:29:39 crc kubenswrapper[4983]: [+]process-running ok Nov 25 20:29:39 crc kubenswrapper[4983]: healthz check failed Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.002347 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zql6p" podUID="3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.002764 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.023043 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.235056 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xzpql"] Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.238575 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.241973 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.249054 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzpql"] Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.280437 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gznhv"] Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.310693 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861534ba-185f-47e0-a0dd-4ce6e14c80ca-catalog-content\") pod \"redhat-marketplace-xzpql\" (UID: \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\") " pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.310775 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861534ba-185f-47e0-a0dd-4ce6e14c80ca-utilities\") pod \"redhat-marketplace-xzpql\" (UID: \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\") " pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.310820 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zsqk\" (UniqueName: 
\"kubernetes.io/projected/861534ba-185f-47e0-a0dd-4ce6e14c80ca-kube-api-access-8zsqk\") pod \"redhat-marketplace-xzpql\" (UID: \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\") " pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.412605 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zsqk\" (UniqueName: \"kubernetes.io/projected/861534ba-185f-47e0-a0dd-4ce6e14c80ca-kube-api-access-8zsqk\") pod \"redhat-marketplace-xzpql\" (UID: \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\") " pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.412690 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861534ba-185f-47e0-a0dd-4ce6e14c80ca-catalog-content\") pod \"redhat-marketplace-xzpql\" (UID: \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\") " pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.412753 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861534ba-185f-47e0-a0dd-4ce6e14c80ca-utilities\") pod \"redhat-marketplace-xzpql\" (UID: \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\") " pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.414007 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861534ba-185f-47e0-a0dd-4ce6e14c80ca-utilities\") pod \"redhat-marketplace-xzpql\" (UID: \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\") " pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.414793 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/861534ba-185f-47e0-a0dd-4ce6e14c80ca-catalog-content\") pod \"redhat-marketplace-xzpql\" (UID: \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\") " pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.448354 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zsqk\" (UniqueName: \"kubernetes.io/projected/861534ba-185f-47e0-a0dd-4ce6e14c80ca-kube-api-access-8zsqk\") pod \"redhat-marketplace-xzpql\" (UID: \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\") " pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.566032 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.629984 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.631864 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m997t"] Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.636900 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.638686 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m997t"] Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.720065 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-catalog-content\") pod \"redhat-marketplace-m997t\" (UID: \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\") " pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.720237 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-utilities\") pod \"redhat-marketplace-m997t\" (UID: \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\") " pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.720297 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh2b7\" (UniqueName: \"kubernetes.io/projected/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-kube-api-access-qh2b7\") pod \"redhat-marketplace-m997t\" (UID: \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\") " pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.820867 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-utilities\") pod \"redhat-marketplace-m997t\" (UID: \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\") " pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.820920 4983 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qh2b7\" (UniqueName: \"kubernetes.io/projected/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-kube-api-access-qh2b7\") pod \"redhat-marketplace-m997t\" (UID: \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\") " pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.820960 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-catalog-content\") pod \"redhat-marketplace-m997t\" (UID: \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\") " pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.821411 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-catalog-content\") pod \"redhat-marketplace-m997t\" (UID: \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\") " pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.821690 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-utilities\") pod \"redhat-marketplace-m997t\" (UID: \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\") " pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.860289 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh2b7\" (UniqueName: \"kubernetes.io/projected/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-kube-api-access-qh2b7\") pod \"redhat-marketplace-m997t\" (UID: \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\") " pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.864334 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" event={"ID":"cb434a7b-12ca-4505-b66c-5d5bf4178d12","Type":"ContainerStarted","Data":"5eb054a66a1b4fcdd5b233d61cc8cdbe6eb54b449ff848fe77f43f6c0f7cf82d"} Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.864383 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" event={"ID":"cb434a7b-12ca-4505-b66c-5d5bf4178d12","Type":"ContainerStarted","Data":"1f3be52ae6268a860d138004a4046196a4fb8ccc666c6a2441a6514ea6df9cd5"} Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.865594 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.892518 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" event={"ID":"6845406f-45aa-4abf-b2ea-729513677ab8","Type":"ContainerStarted","Data":"dbf84c7dc373fc01fe99c50d22a5a2202a581bab7e52997cee07a6f46de342f4"} Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.896209 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzpql"] Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.908718 4983 generic.go:334] "Generic (PLEG): container finished" podID="53324730-9a91-42c0-8d11-9f6789b9deeb" containerID="2c0f93fae0c4279b1ea01b4a689f2ba559a9c31f48a311badfba5abac51deb6c" exitCode=0 Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.908781 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rs4kr" event={"ID":"53324730-9a91-42c0-8d11-9f6789b9deeb","Type":"ContainerDied","Data":"2c0f93fae0c4279b1ea01b4a689f2ba559a9c31f48a311badfba5abac51deb6c"} Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.927660 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.927713 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.949858 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" podStartSLOduration=139.949827531 podStartE2EDuration="2m19.949827531s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:39.904168175 +0000 UTC m=+161.016701567" watchObservedRunningTime="2025-11-25 20:29:39.949827531 +0000 UTC m=+161.062360923" Nov 25 20:29:39 crc kubenswrapper[4983]: I1125 20:29:39.962232 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.008883 4983 patch_prober.go:28] interesting pod/router-default-5444994796-zql6p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 20:29:40 crc kubenswrapper[4983]: [-]has-synced failed: reason withheld Nov 25 20:29:40 crc kubenswrapper[4983]: [+]process-running ok Nov 25 20:29:40 crc kubenswrapper[4983]: healthz check failed Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.008939 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zql6p" podUID="3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.019439 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rb7rw" podStartSLOduration=12.019416389 podStartE2EDuration="12.019416389s" podCreationTimestamp="2025-11-25 20:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:39.954713058 +0000 UTC m=+161.067246460" watchObservedRunningTime="2025-11-25 20:29:40.019416389 +0000 UTC m=+161.131949771" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.019627 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ss2qz"] Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.021238 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.037518 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ss2qz"] Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.039870 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.129019 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92386321-ac04-4379-b4cb-7111d7328dad-utilities\") pod \"redhat-operators-ss2qz\" (UID: \"92386321-ac04-4379-b4cb-7111d7328dad\") " pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.129059 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92386321-ac04-4379-b4cb-7111d7328dad-catalog-content\") pod \"redhat-operators-ss2qz\" (UID: \"92386321-ac04-4379-b4cb-7111d7328dad\") " pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.129097 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbppk\" (UniqueName: \"kubernetes.io/projected/92386321-ac04-4379-b4cb-7111d7328dad-kube-api-access-hbppk\") pod \"redhat-operators-ss2qz\" (UID: \"92386321-ac04-4379-b4cb-7111d7328dad\") " pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.231176 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92386321-ac04-4379-b4cb-7111d7328dad-utilities\") pod \"redhat-operators-ss2qz\" (UID: \"92386321-ac04-4379-b4cb-7111d7328dad\") " 
pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.231222 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92386321-ac04-4379-b4cb-7111d7328dad-catalog-content\") pod \"redhat-operators-ss2qz\" (UID: \"92386321-ac04-4379-b4cb-7111d7328dad\") " pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.231259 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbppk\" (UniqueName: \"kubernetes.io/projected/92386321-ac04-4379-b4cb-7111d7328dad-kube-api-access-hbppk\") pod \"redhat-operators-ss2qz\" (UID: \"92386321-ac04-4379-b4cb-7111d7328dad\") " pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.232111 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92386321-ac04-4379-b4cb-7111d7328dad-utilities\") pod \"redhat-operators-ss2qz\" (UID: \"92386321-ac04-4379-b4cb-7111d7328dad\") " pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.232132 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92386321-ac04-4379-b4cb-7111d7328dad-catalog-content\") pod \"redhat-operators-ss2qz\" (UID: \"92386321-ac04-4379-b4cb-7111d7328dad\") " pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.250295 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ws5rh"] Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.251796 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ws5rh" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.274065 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbppk\" (UniqueName: \"kubernetes.io/projected/92386321-ac04-4379-b4cb-7111d7328dad-kube-api-access-hbppk\") pod \"redhat-operators-ss2qz\" (UID: \"92386321-ac04-4379-b4cb-7111d7328dad\") " pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.332429 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv942\" (UniqueName: \"kubernetes.io/projected/2cc15a77-913d-42f5-90b8-116f12bcf87d-kube-api-access-rv942\") pod \"redhat-operators-ws5rh\" (UID: \"2cc15a77-913d-42f5-90b8-116f12bcf87d\") " pod="openshift-marketplace/redhat-operators-ws5rh" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.332945 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc15a77-913d-42f5-90b8-116f12bcf87d-catalog-content\") pod \"redhat-operators-ws5rh\" (UID: \"2cc15a77-913d-42f5-90b8-116f12bcf87d\") " pod="openshift-marketplace/redhat-operators-ws5rh" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.332971 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc15a77-913d-42f5-90b8-116f12bcf87d-utilities\") pod \"redhat-operators-ws5rh\" (UID: \"2cc15a77-913d-42f5-90b8-116f12bcf87d\") " pod="openshift-marketplace/redhat-operators-ws5rh" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.351013 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.385749 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ws5rh"] Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.434516 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv942\" (UniqueName: \"kubernetes.io/projected/2cc15a77-913d-42f5-90b8-116f12bcf87d-kube-api-access-rv942\") pod \"redhat-operators-ws5rh\" (UID: \"2cc15a77-913d-42f5-90b8-116f12bcf87d\") " pod="openshift-marketplace/redhat-operators-ws5rh" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.434768 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc15a77-913d-42f5-90b8-116f12bcf87d-catalog-content\") pod \"redhat-operators-ws5rh\" (UID: \"2cc15a77-913d-42f5-90b8-116f12bcf87d\") " pod="openshift-marketplace/redhat-operators-ws5rh" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.434796 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc15a77-913d-42f5-90b8-116f12bcf87d-utilities\") pod \"redhat-operators-ws5rh\" (UID: \"2cc15a77-913d-42f5-90b8-116f12bcf87d\") " pod="openshift-marketplace/redhat-operators-ws5rh" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.435605 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc15a77-913d-42f5-90b8-116f12bcf87d-catalog-content\") pod \"redhat-operators-ws5rh\" (UID: \"2cc15a77-913d-42f5-90b8-116f12bcf87d\") " pod="openshift-marketplace/redhat-operators-ws5rh" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.435648 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2cc15a77-913d-42f5-90b8-116f12bcf87d-utilities\") pod \"redhat-operators-ws5rh\" (UID: \"2cc15a77-913d-42f5-90b8-116f12bcf87d\") " pod="openshift-marketplace/redhat-operators-ws5rh" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.455034 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv942\" (UniqueName: \"kubernetes.io/projected/2cc15a77-913d-42f5-90b8-116f12bcf87d-kube-api-access-rv942\") pod \"redhat-operators-ws5rh\" (UID: \"2cc15a77-913d-42f5-90b8-116f12bcf87d\") " pod="openshift-marketplace/redhat-operators-ws5rh" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.526382 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m997t"] Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.601763 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ws5rh" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.867018 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ss2qz"] Nov 25 20:29:40 crc kubenswrapper[4983]: W1125 20:29:40.874363 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92386321_ac04_4379_b4cb_7111d7328dad.slice/crio-ba835cef360443954d0d986e5d5baced17cbe0e13e16b1879ac78c52f641bc55 WatchSource:0}: Error finding container ba835cef360443954d0d986e5d5baced17cbe0e13e16b1879ac78c52f641bc55: Status 404 returned error can't find the container with id ba835cef360443954d0d986e5d5baced17cbe0e13e16b1879ac78c52f641bc55 Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.921683 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.922712 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.925661 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.925958 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.929854 4983 generic.go:334] "Generic (PLEG): container finished" podID="861534ba-185f-47e0-a0dd-4ce6e14c80ca" containerID="806222fd00baf55833ca14dd0bf4796bf276c267ed0e488d7388eecb6b4afb8c" exitCode=0 Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.930909 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzpql" event={"ID":"861534ba-185f-47e0-a0dd-4ce6e14c80ca","Type":"ContainerDied","Data":"806222fd00baf55833ca14dd0bf4796bf276c267ed0e488d7388eecb6b4afb8c"} Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.931002 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzpql" event={"ID":"861534ba-185f-47e0-a0dd-4ce6e14c80ca","Type":"ContainerStarted","Data":"1b80c547e4752d0352a19f60a63d348f35d1dbe22507412b97d00cd5461e761a"} Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.934090 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ss2qz" event={"ID":"92386321-ac04-4379-b4cb-7111d7328dad","Type":"ContainerStarted","Data":"ba835cef360443954d0d986e5d5baced17cbe0e13e16b1879ac78c52f641bc55"} Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.936073 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.937775 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-m997t" event={"ID":"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d","Type":"ContainerStarted","Data":"ccb27ed1427fa79d129cb531898765d0ce1b2f97c42794c3812d422ca05441be"} Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.950293 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m997t" event={"ID":"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d","Type":"ContainerStarted","Data":"53afbae54defe7e6731e01fed5cecb1d4b001167e692ff99301fee8175bed3ee"} Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.960593 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27051014-f273-483d-bc3f-67425512b56d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"27051014-f273-483d-bc3f-67425512b56d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 20:29:40 crc kubenswrapper[4983]: I1125 20:29:40.960745 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27051014-f273-483d-bc3f-67425512b56d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"27051014-f273-483d-bc3f-67425512b56d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.003368 4983 patch_prober.go:28] interesting pod/router-default-5444994796-zql6p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 20:29:41 crc kubenswrapper[4983]: [-]has-synced failed: reason withheld Nov 25 20:29:41 crc kubenswrapper[4983]: [+]process-running ok Nov 25 20:29:41 crc kubenswrapper[4983]: healthz check failed Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.003517 4983 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-zql6p" podUID="3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.065385 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27051014-f273-483d-bc3f-67425512b56d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"27051014-f273-483d-bc3f-67425512b56d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.065515 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27051014-f273-483d-bc3f-67425512b56d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"27051014-f273-483d-bc3f-67425512b56d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.068509 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27051014-f273-483d-bc3f-67425512b56d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"27051014-f273-483d-bc3f-67425512b56d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.083310 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ws5rh"] Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.097100 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27051014-f273-483d-bc3f-67425512b56d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"27051014-f273-483d-bc3f-67425512b56d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.243563 
4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.243640 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.246006 4983 patch_prober.go:28] interesting pod/console-f9d7485db-g8bfq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.246059 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g8bfq" podUID="06dff288-ef5e-4a4a-88e5-ce25c216ee5a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.266093 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.320718 4983 patch_prober.go:28] interesting pod/downloads-7954f5f757-b2krm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.320787 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-b2krm" podUID="e842492e-468d-46a1-b4ae-2098daf5e263" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.321687 4983 patch_prober.go:28] interesting pod/downloads-7954f5f757-b2krm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.321771 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-b2krm" podUID="e842492e-468d-46a1-b4ae-2098daf5e263" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.338350 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.338406 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.349935 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.380130 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.380630 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.388309 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.500032 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.635661 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.956097 4983 generic.go:334] "Generic (PLEG): container finished" podID="2cc15a77-913d-42f5-90b8-116f12bcf87d" containerID="8a3dee91c0dfe952230557a5185562c1ed103b329fbde506a8af6dfce4dc8d07" exitCode=0 Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.956440 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws5rh" event={"ID":"2cc15a77-913d-42f5-90b8-116f12bcf87d","Type":"ContainerDied","Data":"8a3dee91c0dfe952230557a5185562c1ed103b329fbde506a8af6dfce4dc8d07"} Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.956470 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws5rh" event={"ID":"2cc15a77-913d-42f5-90b8-116f12bcf87d","Type":"ContainerStarted","Data":"d0dce9e4f8bb8269ea36daa89fda8eca795ee27338654b29519117df178d7e74"} Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.961770 4983 generic.go:334] 
"Generic (PLEG): container finished" podID="92386321-ac04-4379-b4cb-7111d7328dad" containerID="280a597ef8546dae29266f3b11317dd657ec62470aa5cc2efcdcaedd35c23b6a" exitCode=0 Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.961983 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ss2qz" event={"ID":"92386321-ac04-4379-b4cb-7111d7328dad","Type":"ContainerDied","Data":"280a597ef8546dae29266f3b11317dd657ec62470aa5cc2efcdcaedd35c23b6a"} Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.972680 4983 generic.go:334] "Generic (PLEG): container finished" podID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" containerID="ccb27ed1427fa79d129cb531898765d0ce1b2f97c42794c3812d422ca05441be" exitCode=0 Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.972756 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m997t" event={"ID":"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d","Type":"ContainerDied","Data":"ccb27ed1427fa79d129cb531898765d0ce1b2f97c42794c3812d422ca05441be"} Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.982546 4983 generic.go:334] "Generic (PLEG): container finished" podID="ab247bf3-165b-4513-ad09-b33ce8fc15a8" containerID="874971330c4615c40b2c77b0a2d79f04760f84ddb70cdf95a968f40aed4dd84a" exitCode=0 Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.982659 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" event={"ID":"ab247bf3-165b-4513-ad09-b33ce8fc15a8","Type":"ContainerDied","Data":"874971330c4615c40b2c77b0a2d79f04760f84ddb70cdf95a968f40aed4dd84a"} Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.985917 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"27051014-f273-483d-bc3f-67425512b56d","Type":"ContainerStarted","Data":"9ebf032243aeb821f286dcc5671f9b8f770c8a20c7c2791b91c53e760be1e951"} Nov 
25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.991276 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-6xxlr" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.994101 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hr5b8" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.996174 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:41 crc kubenswrapper[4983]: I1125 20:29:41.997437 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j79zm" Nov 25 20:29:42 crc kubenswrapper[4983]: I1125 20:29:42.005615 4983 patch_prober.go:28] interesting pod/router-default-5444994796-zql6p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 20:29:42 crc kubenswrapper[4983]: [-]has-synced failed: reason withheld Nov 25 20:29:42 crc kubenswrapper[4983]: [+]process-running ok Nov 25 20:29:42 crc kubenswrapper[4983]: healthz check failed Nov 25 20:29:42 crc kubenswrapper[4983]: I1125 20:29:42.005681 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zql6p" podUID="3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:29:42 crc kubenswrapper[4983]: I1125 20:29:42.622421 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " 
pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:42 crc kubenswrapper[4983]: I1125 20:29:42.631073 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/badc9ffd-b860-4ebb-a59f-044def6963d4-metrics-certs\") pod \"network-metrics-daemon-59l9r\" (UID: \"badc9ffd-b860-4ebb-a59f-044def6963d4\") " pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:42 crc kubenswrapper[4983]: I1125 20:29:42.820617 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-59l9r" Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.013279 4983 patch_prober.go:28] interesting pod/router-default-5444994796-zql6p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 20:29:43 crc kubenswrapper[4983]: [-]has-synced failed: reason withheld Nov 25 20:29:43 crc kubenswrapper[4983]: [+]process-running ok Nov 25 20:29:43 crc kubenswrapper[4983]: healthz check failed Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.013766 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zql6p" podUID="3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.045730 4983 generic.go:334] "Generic (PLEG): container finished" podID="27051014-f273-483d-bc3f-67425512b56d" containerID="20f3f0ad3f1323207e9e0b8b68024f918f0fac2895414e2d57976699653e6a83" exitCode=0 Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.046001 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"27051014-f273-483d-bc3f-67425512b56d","Type":"ContainerDied","Data":"20f3f0ad3f1323207e9e0b8b68024f918f0fac2895414e2d57976699653e6a83"} Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.172934 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-59l9r"] Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.438149 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.566323 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hnlf\" (UniqueName: \"kubernetes.io/projected/ab247bf3-165b-4513-ad09-b33ce8fc15a8-kube-api-access-7hnlf\") pod \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\" (UID: \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\") " Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.566404 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab247bf3-165b-4513-ad09-b33ce8fc15a8-secret-volume\") pod \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\" (UID: \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\") " Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.566432 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab247bf3-165b-4513-ad09-b33ce8fc15a8-config-volume\") pod \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\" (UID: \"ab247bf3-165b-4513-ad09-b33ce8fc15a8\") " Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.567685 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab247bf3-165b-4513-ad09-b33ce8fc15a8-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab247bf3-165b-4513-ad09-b33ce8fc15a8" (UID: "ab247bf3-165b-4513-ad09-b33ce8fc15a8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.583051 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab247bf3-165b-4513-ad09-b33ce8fc15a8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab247bf3-165b-4513-ad09-b33ce8fc15a8" (UID: "ab247bf3-165b-4513-ad09-b33ce8fc15a8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.590877 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab247bf3-165b-4513-ad09-b33ce8fc15a8-kube-api-access-7hnlf" (OuterVolumeSpecName: "kube-api-access-7hnlf") pod "ab247bf3-165b-4513-ad09-b33ce8fc15a8" (UID: "ab247bf3-165b-4513-ad09-b33ce8fc15a8"). InnerVolumeSpecName "kube-api-access-7hnlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.667509 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hnlf\" (UniqueName: \"kubernetes.io/projected/ab247bf3-165b-4513-ad09-b33ce8fc15a8-kube-api-access-7hnlf\") on node \"crc\" DevicePath \"\"" Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.667575 4983 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab247bf3-165b-4513-ad09-b33ce8fc15a8-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 20:29:43 crc kubenswrapper[4983]: I1125 20:29:43.667588 4983 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab247bf3-165b-4513-ad09-b33ce8fc15a8-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.001937 4983 patch_prober.go:28] interesting pod/router-default-5444994796-zql6p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 20:29:44 crc kubenswrapper[4983]: [-]has-synced failed: reason withheld Nov 25 20:29:44 crc kubenswrapper[4983]: [+]process-running ok Nov 25 20:29:44 crc kubenswrapper[4983]: healthz check failed Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.002019 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zql6p" podUID="3a6d160b-65e5-4c6d-bc1c-4c24a7b84a91" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.131115 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" event={"ID":"ab247bf3-165b-4513-ad09-b33ce8fc15a8","Type":"ContainerDied","Data":"b79cc83bc2ac01c7fc9f4ef8010f41f71288a2e24d2a2466e1240bfed6a29a3e"} Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.131203 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b79cc83bc2ac01c7fc9f4ef8010f41f71288a2e24d2a2466e1240bfed6a29a3e" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.131333 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.153697 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-59l9r" event={"ID":"badc9ffd-b860-4ebb-a59f-044def6963d4","Type":"ContainerStarted","Data":"eaf3fff4b2694f852318a7b9cc0642f77f3174ecac99823cc50e741cd066d0f0"} Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.367028 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 20:29:44 crc kubenswrapper[4983]: E1125 20:29:44.367461 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab247bf3-165b-4513-ad09-b33ce8fc15a8" containerName="collect-profiles" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.367476 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab247bf3-165b-4513-ad09-b33ce8fc15a8" containerName="collect-profiles" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.367610 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab247bf3-165b-4513-ad09-b33ce8fc15a8" containerName="collect-profiles" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.368205 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.379322 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.379670 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.391028 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.485540 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ab40166-22cf-41ce-92d7-8e0205809813-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3ab40166-22cf-41ce-92d7-8e0205809813\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.485607 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ab40166-22cf-41ce-92d7-8e0205809813-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3ab40166-22cf-41ce-92d7-8e0205809813\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.589490 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ab40166-22cf-41ce-92d7-8e0205809813-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3ab40166-22cf-41ce-92d7-8e0205809813\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.590328 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/3ab40166-22cf-41ce-92d7-8e0205809813-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3ab40166-22cf-41ce-92d7-8e0205809813\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.590151 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ab40166-22cf-41ce-92d7-8e0205809813-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"3ab40166-22cf-41ce-92d7-8e0205809813\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.646852 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ab40166-22cf-41ce-92d7-8e0205809813-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"3ab40166-22cf-41ce-92d7-8e0205809813\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.648120 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.719080 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.793268 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27051014-f273-483d-bc3f-67425512b56d-kubelet-dir\") pod \"27051014-f273-483d-bc3f-67425512b56d\" (UID: \"27051014-f273-483d-bc3f-67425512b56d\") " Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.793440 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27051014-f273-483d-bc3f-67425512b56d-kube-api-access\") pod \"27051014-f273-483d-bc3f-67425512b56d\" (UID: \"27051014-f273-483d-bc3f-67425512b56d\") " Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.796476 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27051014-f273-483d-bc3f-67425512b56d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "27051014-f273-483d-bc3f-67425512b56d" (UID: "27051014-f273-483d-bc3f-67425512b56d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.799030 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27051014-f273-483d-bc3f-67425512b56d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "27051014-f273-483d-bc3f-67425512b56d" (UID: "27051014-f273-483d-bc3f-67425512b56d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.895847 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27051014-f273-483d-bc3f-67425512b56d-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 20:29:44 crc kubenswrapper[4983]: I1125 20:29:44.895883 4983 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27051014-f273-483d-bc3f-67425512b56d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 20:29:45 crc kubenswrapper[4983]: I1125 20:29:45.000650 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:45 crc kubenswrapper[4983]: I1125 20:29:45.008829 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-zql6p" Nov 25 20:29:45 crc kubenswrapper[4983]: I1125 20:29:45.209872 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"27051014-f273-483d-bc3f-67425512b56d","Type":"ContainerDied","Data":"9ebf032243aeb821f286dcc5671f9b8f770c8a20c7c2791b91c53e760be1e951"} Nov 25 20:29:45 crc kubenswrapper[4983]: I1125 20:29:45.209996 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ebf032243aeb821f286dcc5671f9b8f770c8a20c7c2791b91c53e760be1e951" Nov 25 20:29:45 crc kubenswrapper[4983]: I1125 20:29:45.210095 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 20:29:45 crc kubenswrapper[4983]: I1125 20:29:45.239474 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-59l9r" event={"ID":"badc9ffd-b860-4ebb-a59f-044def6963d4","Type":"ContainerStarted","Data":"7b31ef3fbdea13fe61b1affee0444c85d4fb20af0e50ebe5b1e7a8f2eb33aa30"} Nov 25 20:29:45 crc kubenswrapper[4983]: I1125 20:29:45.559027 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 20:29:46 crc kubenswrapper[4983]: I1125 20:29:46.253485 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3ab40166-22cf-41ce-92d7-8e0205809813","Type":"ContainerStarted","Data":"de69fc91a0aeafea299deea2fbfb4111856c1d156e83d158c86d24890da41845"} Nov 25 20:29:46 crc kubenswrapper[4983]: I1125 20:29:46.261096 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-59l9r" event={"ID":"badc9ffd-b860-4ebb-a59f-044def6963d4","Type":"ContainerStarted","Data":"de2b4e19e82cf2c2852bac6c0e227e78e115b3021fbafe224fc89d90a7fd6a8e"} Nov 25 20:29:46 crc kubenswrapper[4983]: I1125 20:29:46.290023 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-59l9r" podStartSLOduration=146.290005891 podStartE2EDuration="2m26.290005891s" podCreationTimestamp="2025-11-25 20:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:29:46.286571301 +0000 UTC m=+167.399104703" watchObservedRunningTime="2025-11-25 20:29:46.290005891 +0000 UTC m=+167.402539283" Nov 25 20:29:47 crc kubenswrapper[4983]: I1125 20:29:47.180871 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fc95k" Nov 25 20:29:47 
crc kubenswrapper[4983]: I1125 20:29:47.292033 4983 generic.go:334] "Generic (PLEG): container finished" podID="3ab40166-22cf-41ce-92d7-8e0205809813" containerID="5291e5342498ad9c61b1ad7e2345d54f1ca5dcf35b0311edec48fd57a295e414" exitCode=0 Nov 25 20:29:47 crc kubenswrapper[4983]: I1125 20:29:47.292167 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3ab40166-22cf-41ce-92d7-8e0205809813","Type":"ContainerDied","Data":"5291e5342498ad9c61b1ad7e2345d54f1ca5dcf35b0311edec48fd57a295e414"} Nov 25 20:29:51 crc kubenswrapper[4983]: I1125 20:29:51.249144 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:51 crc kubenswrapper[4983]: I1125 20:29:51.254152 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:29:51 crc kubenswrapper[4983]: I1125 20:29:51.325493 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-b2krm" Nov 25 20:29:55 crc kubenswrapper[4983]: I1125 20:29:55.353659 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 20:29:55 crc kubenswrapper[4983]: I1125 20:29:55.383319 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"3ab40166-22cf-41ce-92d7-8e0205809813","Type":"ContainerDied","Data":"de69fc91a0aeafea299deea2fbfb4111856c1d156e83d158c86d24890da41845"} Nov 25 20:29:55 crc kubenswrapper[4983]: I1125 20:29:55.383368 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de69fc91a0aeafea299deea2fbfb4111856c1d156e83d158c86d24890da41845" Nov 25 20:29:55 crc kubenswrapper[4983]: I1125 20:29:55.383391 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 20:29:55 crc kubenswrapper[4983]: I1125 20:29:55.508492 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ab40166-22cf-41ce-92d7-8e0205809813-kube-api-access\") pod \"3ab40166-22cf-41ce-92d7-8e0205809813\" (UID: \"3ab40166-22cf-41ce-92d7-8e0205809813\") " Nov 25 20:29:55 crc kubenswrapper[4983]: I1125 20:29:55.508798 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ab40166-22cf-41ce-92d7-8e0205809813-kubelet-dir\") pod \"3ab40166-22cf-41ce-92d7-8e0205809813\" (UID: \"3ab40166-22cf-41ce-92d7-8e0205809813\") " Nov 25 20:29:55 crc kubenswrapper[4983]: I1125 20:29:55.509344 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ab40166-22cf-41ce-92d7-8e0205809813-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3ab40166-22cf-41ce-92d7-8e0205809813" (UID: "3ab40166-22cf-41ce-92d7-8e0205809813"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:29:55 crc kubenswrapper[4983]: I1125 20:29:55.518810 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab40166-22cf-41ce-92d7-8e0205809813-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3ab40166-22cf-41ce-92d7-8e0205809813" (UID: "3ab40166-22cf-41ce-92d7-8e0205809813"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:29:55 crc kubenswrapper[4983]: I1125 20:29:55.611701 4983 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ab40166-22cf-41ce-92d7-8e0205809813-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 20:29:55 crc kubenswrapper[4983]: I1125 20:29:55.611771 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ab40166-22cf-41ce-92d7-8e0205809813-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 20:29:58 crc kubenswrapper[4983]: I1125 20:29:58.983845 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.132070 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8"] Nov 25 20:30:00 crc kubenswrapper[4983]: E1125 20:30:00.132653 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab40166-22cf-41ce-92d7-8e0205809813" containerName="pruner" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.132668 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab40166-22cf-41ce-92d7-8e0205809813" containerName="pruner" Nov 25 20:30:00 crc kubenswrapper[4983]: E1125 20:30:00.132686 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27051014-f273-483d-bc3f-67425512b56d" containerName="pruner" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.132694 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="27051014-f273-483d-bc3f-67425512b56d" containerName="pruner" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.132825 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="27051014-f273-483d-bc3f-67425512b56d" containerName="pruner" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.132839 4983 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab40166-22cf-41ce-92d7-8e0205809813" containerName="pruner" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.133251 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.135088 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.136085 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.153509 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8"] Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.283659 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-config-volume\") pod \"collect-profiles-29401710-jzmm8\" (UID: \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.283726 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-secret-volume\") pod \"collect-profiles-29401710-jzmm8\" (UID: \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.283757 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2l9c\" 
(UniqueName: \"kubernetes.io/projected/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-kube-api-access-r2l9c\") pod \"collect-profiles-29401710-jzmm8\" (UID: \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.384657 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-config-volume\") pod \"collect-profiles-29401710-jzmm8\" (UID: \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.384710 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-secret-volume\") pod \"collect-profiles-29401710-jzmm8\" (UID: \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.384736 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2l9c\" (UniqueName: \"kubernetes.io/projected/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-kube-api-access-r2l9c\") pod \"collect-profiles-29401710-jzmm8\" (UID: \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.385610 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-config-volume\") pod \"collect-profiles-29401710-jzmm8\" (UID: \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 
20:30:00.394764 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-secret-volume\") pod \"collect-profiles-29401710-jzmm8\" (UID: \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.402109 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2l9c\" (UniqueName: \"kubernetes.io/projected/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-kube-api-access-r2l9c\") pod \"collect-profiles-29401710-jzmm8\" (UID: \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" Nov 25 20:30:00 crc kubenswrapper[4983]: I1125 20:30:00.498013 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" Nov 25 20:30:07 crc kubenswrapper[4983]: I1125 20:30:07.691042 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 20:30:09 crc kubenswrapper[4983]: I1125 20:30:09.927853 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:30:09 crc kubenswrapper[4983]: I1125 20:30:09.928456 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:30:11 crc kubenswrapper[4983]: E1125 
20:30:11.447442 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 20:30:11 crc kubenswrapper[4983]: E1125 20:30:11.448471 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sgplg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-v5mqd_openshift-marketplace(cb2d46db-fa4e-4967-89d2-e6993f05bb90): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 20:30:11 crc kubenswrapper[4983]: E1125 20:30:11.450574 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-v5mqd" podUID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" Nov 25 20:30:11 crc kubenswrapper[4983]: E1125 20:30:11.470664 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-v5mqd" podUID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" Nov 25 20:30:11 crc kubenswrapper[4983]: I1125 20:30:11.522014 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-fd4hc" Nov 25 20:30:11 crc kubenswrapper[4983]: E1125 20:30:11.525368 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 20:30:11 crc kubenswrapper[4983]: E1125 20:30:11.525513 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66b97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rs4kr_openshift-marketplace(53324730-9a91-42c0-8d11-9f6789b9deeb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 20:30:11 crc kubenswrapper[4983]: E1125 20:30:11.526732 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rs4kr" podUID="53324730-9a91-42c0-8d11-9f6789b9deeb" Nov 25 20:30:11 crc 
kubenswrapper[4983]: E1125 20:30:11.548868 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 20:30:11 crc kubenswrapper[4983]: E1125 20:30:11.549004 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qh2b7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-m997t_openshift-marketplace(bafc524a-ba74-4b1b-9fdb-2054db1c2a4d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 20:30:11 crc kubenswrapper[4983]: E1125 20:30:11.551753 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-m997t" podUID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" Nov 25 20:30:11 crc kubenswrapper[4983]: E1125 20:30:11.605024 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 20:30:11 crc kubenswrapper[4983]: E1125 20:30:11.605255 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-722g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4zs2f_openshift-marketplace(33636a92-6a39-4007-b537-94bdfa5c9191): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 20:30:11 crc kubenswrapper[4983]: E1125 20:30:11.606693 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4zs2f" podUID="33636a92-6a39-4007-b537-94bdfa5c9191" Nov 25 20:30:11 crc 
kubenswrapper[4983]: I1125 20:30:11.917899 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8"]
Nov 25 20:30:11 crc kubenswrapper[4983]: W1125 20:30:11.956113 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbad7ed5_4e2f_4c15_98f6_88b58a937f18.slice/crio-285cb583ae883947b7f80df67d29371fd89d962353b709f7add1ac53541dc631 WatchSource:0}: Error finding container 285cb583ae883947b7f80df67d29371fd89d962353b709f7add1ac53541dc631: Status 404 returned error can't find the container with id 285cb583ae883947b7f80df67d29371fd89d962353b709f7add1ac53541dc631
Nov 25 20:30:12 crc kubenswrapper[4983]: I1125 20:30:12.477145 4983 generic.go:334] "Generic (PLEG): container finished" podID="2cc15a77-913d-42f5-90b8-116f12bcf87d" containerID="829535a06e4c911f6648272930c8ad56f009d05d2599a2a9ac3f79d3ff48b60a" exitCode=0
Nov 25 20:30:12 crc kubenswrapper[4983]: I1125 20:30:12.477364 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws5rh" event={"ID":"2cc15a77-913d-42f5-90b8-116f12bcf87d","Type":"ContainerDied","Data":"829535a06e4c911f6648272930c8ad56f009d05d2599a2a9ac3f79d3ff48b60a"}
Nov 25 20:30:12 crc kubenswrapper[4983]: I1125 20:30:12.492237 4983 generic.go:334] "Generic (PLEG): container finished" podID="861534ba-185f-47e0-a0dd-4ce6e14c80ca" containerID="f540e1e2c243d8779203e05a1f2ca62e027f1b464814f6b5e5f5832a43a1e0e1" exitCode=0
Nov 25 20:30:12 crc kubenswrapper[4983]: I1125 20:30:12.492432 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzpql" event={"ID":"861534ba-185f-47e0-a0dd-4ce6e14c80ca","Type":"ContainerDied","Data":"f540e1e2c243d8779203e05a1f2ca62e027f1b464814f6b5e5f5832a43a1e0e1"}
Nov 25 20:30:12 crc kubenswrapper[4983]: I1125 20:30:12.500370 4983 generic.go:334] "Generic (PLEG): container finished" podID="bbad7ed5-4e2f-4c15-98f6-88b58a937f18" containerID="f4bd79ebe0944ee7635d29d620a237c7163baf5fada0dfaab2b7af636c0a80cc" exitCode=0
Nov 25 20:30:12 crc kubenswrapper[4983]: I1125 20:30:12.500620 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" event={"ID":"bbad7ed5-4e2f-4c15-98f6-88b58a937f18","Type":"ContainerDied","Data":"f4bd79ebe0944ee7635d29d620a237c7163baf5fada0dfaab2b7af636c0a80cc"}
Nov 25 20:30:12 crc kubenswrapper[4983]: I1125 20:30:12.500665 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" event={"ID":"bbad7ed5-4e2f-4c15-98f6-88b58a937f18","Type":"ContainerStarted","Data":"285cb583ae883947b7f80df67d29371fd89d962353b709f7add1ac53541dc631"}
Nov 25 20:30:12 crc kubenswrapper[4983]: I1125 20:30:12.503816 4983 generic.go:334] "Generic (PLEG): container finished" podID="620f0ca8-67c8-45fc-a41b-5af30115fc1b" containerID="54bf16fcfd7b967a7aa907fdcfeb13a8d7204e5c9c31598dbc32c1fb7e7a5abf" exitCode=0
Nov 25 20:30:12 crc kubenswrapper[4983]: I1125 20:30:12.503916 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrbgw" event={"ID":"620f0ca8-67c8-45fc-a41b-5af30115fc1b","Type":"ContainerDied","Data":"54bf16fcfd7b967a7aa907fdcfeb13a8d7204e5c9c31598dbc32c1fb7e7a5abf"}
Nov 25 20:30:12 crc kubenswrapper[4983]: I1125 20:30:12.509345 4983 generic.go:334] "Generic (PLEG): container finished" podID="92386321-ac04-4379-b4cb-7111d7328dad" containerID="86bec4767168b2a8b1b7c1af7a602adc1564c330918315478acf76c949754742" exitCode=0
Nov 25 20:30:12 crc kubenswrapper[4983]: I1125 20:30:12.510204 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ss2qz" event={"ID":"92386321-ac04-4379-b4cb-7111d7328dad","Type":"ContainerDied","Data":"86bec4767168b2a8b1b7c1af7a602adc1564c330918315478acf76c949754742"}
Nov 25 20:30:12 crc kubenswrapper[4983]: E1125 20:30:12.514892 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rs4kr" podUID="53324730-9a91-42c0-8d11-9f6789b9deeb"
Nov 25 20:30:12 crc kubenswrapper[4983]: E1125 20:30:12.520038 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4zs2f" podUID="33636a92-6a39-4007-b537-94bdfa5c9191"
Nov 25 20:30:12 crc kubenswrapper[4983]: E1125 20:30:12.521426 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m997t" podUID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d"
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.518070 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzpql" event={"ID":"861534ba-185f-47e0-a0dd-4ce6e14c80ca","Type":"ContainerStarted","Data":"cec0001da0d5a37daef1442b17ea19e390f6a4f6918e23f2c3574191af9384b6"}
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.521807 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrbgw" event={"ID":"620f0ca8-67c8-45fc-a41b-5af30115fc1b","Type":"ContainerStarted","Data":"f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e"}
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.524015 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ss2qz" event={"ID":"92386321-ac04-4379-b4cb-7111d7328dad","Type":"ContainerStarted","Data":"df22c72001a8368053abaf009bdf5e19d5548207184c5d6b756b76f76e341084"}
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.526880 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws5rh" event={"ID":"2cc15a77-913d-42f5-90b8-116f12bcf87d","Type":"ContainerStarted","Data":"1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd"}
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.582462 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xzpql" podStartSLOduration=2.494456163 podStartE2EDuration="34.582446241s" podCreationTimestamp="2025-11-25 20:29:39 +0000 UTC" firstStartedPulling="2025-11-25 20:29:40.94320428 +0000 UTC m=+162.055737672" lastFinishedPulling="2025-11-25 20:30:13.031194328 +0000 UTC m=+194.143727750" observedRunningTime="2025-11-25 20:30:13.553087798 +0000 UTC m=+194.665621230" watchObservedRunningTime="2025-11-25 20:30:13.582446241 +0000 UTC m=+194.694979633"
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.582601 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ss2qz" podStartSLOduration=2.62933768 podStartE2EDuration="33.582596005s" podCreationTimestamp="2025-11-25 20:29:40 +0000 UTC" firstStartedPulling="2025-11-25 20:29:41.969647899 +0000 UTC m=+163.082181291" lastFinishedPulling="2025-11-25 20:30:12.922906224 +0000 UTC m=+194.035439616" observedRunningTime="2025-11-25 20:30:13.580416718 +0000 UTC m=+194.692950120" watchObservedRunningTime="2025-11-25 20:30:13.582596005 +0000 UTC m=+194.695129417"
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.601849 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ws5rh" podStartSLOduration=2.649626366 podStartE2EDuration="33.601834324s" podCreationTimestamp="2025-11-25 20:29:40 +0000 UTC" firstStartedPulling="2025-11-25 20:29:41.957965906 +0000 UTC m=+163.070499298" lastFinishedPulling="2025-11-25 20:30:12.910173854 +0000 UTC m=+194.022707256" observedRunningTime="2025-11-25 20:30:13.60049938 +0000 UTC m=+194.713032772" watchObservedRunningTime="2025-11-25 20:30:13.601834324 +0000 UTC m=+194.714367716"
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.623783 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hrbgw" podStartSLOduration=2.351800613 podStartE2EDuration="36.623757424s" podCreationTimestamp="2025-11-25 20:29:37 +0000 UTC" firstStartedPulling="2025-11-25 20:29:38.818125927 +0000 UTC m=+159.930659319" lastFinishedPulling="2025-11-25 20:30:13.090082738 +0000 UTC m=+194.202616130" observedRunningTime="2025-11-25 20:30:13.622218934 +0000 UTC m=+194.734752326" watchObservedRunningTime="2025-11-25 20:30:13.623757424 +0000 UTC m=+194.736290816"
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.772631 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8"
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.871259 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2l9c\" (UniqueName: \"kubernetes.io/projected/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-kube-api-access-r2l9c\") pod \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\" (UID: \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\") "
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.871436 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-secret-volume\") pod \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\" (UID: \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\") "
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.871487 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-config-volume\") pod \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\" (UID: \"bbad7ed5-4e2f-4c15-98f6-88b58a937f18\") "
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.872586 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-config-volume" (OuterVolumeSpecName: "config-volume") pod "bbad7ed5-4e2f-4c15-98f6-88b58a937f18" (UID: "bbad7ed5-4e2f-4c15-98f6-88b58a937f18"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.876705 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bbad7ed5-4e2f-4c15-98f6-88b58a937f18" (UID: "bbad7ed5-4e2f-4c15-98f6-88b58a937f18"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.892916 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-kube-api-access-r2l9c" (OuterVolumeSpecName: "kube-api-access-r2l9c") pod "bbad7ed5-4e2f-4c15-98f6-88b58a937f18" (UID: "bbad7ed5-4e2f-4c15-98f6-88b58a937f18"). InnerVolumeSpecName "kube-api-access-r2l9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.974227 4983 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.974275 4983 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-config-volume\") on node \"crc\" DevicePath \"\""
Nov 25 20:30:13 crc kubenswrapper[4983]: I1125 20:30:13.974294 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2l9c\" (UniqueName: \"kubernetes.io/projected/bbad7ed5-4e2f-4c15-98f6-88b58a937f18-kube-api-access-r2l9c\") on node \"crc\" DevicePath \"\""
Nov 25 20:30:14 crc kubenswrapper[4983]: I1125 20:30:14.535422 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8"
Nov 25 20:30:14 crc kubenswrapper[4983]: I1125 20:30:14.535589 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8" event={"ID":"bbad7ed5-4e2f-4c15-98f6-88b58a937f18","Type":"ContainerDied","Data":"285cb583ae883947b7f80df67d29371fd89d962353b709f7add1ac53541dc631"}
Nov 25 20:30:14 crc kubenswrapper[4983]: I1125 20:30:14.535935 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="285cb583ae883947b7f80df67d29371fd89d962353b709f7add1ac53541dc631"
Nov 25 20:30:17 crc kubenswrapper[4983]: I1125 20:30:17.611428 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hrbgw"
Nov 25 20:30:17 crc kubenswrapper[4983]: I1125 20:30:17.611616 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hrbgw"
Nov 25 20:30:17 crc kubenswrapper[4983]: I1125 20:30:17.978085 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hrbgw"
Nov 25 20:30:18 crc kubenswrapper[4983]: I1125 20:30:18.608477 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hrbgw"
Nov 25 20:30:19 crc kubenswrapper[4983]: I1125 20:30:19.566132 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xzpql"
Nov 25 20:30:19 crc kubenswrapper[4983]: I1125 20:30:19.566532 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xzpql"
Nov 25 20:30:19 crc kubenswrapper[4983]: I1125 20:30:19.613152 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xzpql"
Nov 25 20:30:19 crc kubenswrapper[4983]: I1125 20:30:19.823525 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hrbgw"]
Nov 25 20:30:20 crc kubenswrapper[4983]: I1125 20:30:20.351711 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ss2qz"
Nov 25 20:30:20 crc kubenswrapper[4983]: I1125 20:30:20.353060 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ss2qz"
Nov 25 20:30:20 crc kubenswrapper[4983]: I1125 20:30:20.390019 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ss2qz"
Nov 25 20:30:20 crc kubenswrapper[4983]: I1125 20:30:20.569400 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hrbgw" podUID="620f0ca8-67c8-45fc-a41b-5af30115fc1b" containerName="registry-server" containerID="cri-o://f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e" gracePeriod=2
Nov 25 20:30:20 crc kubenswrapper[4983]: I1125 20:30:20.602637 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ws5rh"
Nov 25 20:30:20 crc kubenswrapper[4983]: I1125 20:30:20.602905 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ws5rh"
Nov 25 20:30:20 crc kubenswrapper[4983]: I1125 20:30:20.614925 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ss2qz"
Nov 25 20:30:20 crc kubenswrapper[4983]: I1125 20:30:20.616325 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xzpql"
Nov 25 20:30:20 crc kubenswrapper[4983]: I1125 20:30:20.647957 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ws5rh"
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.089619 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hrbgw"
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.202801 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620f0ca8-67c8-45fc-a41b-5af30115fc1b-catalog-content\") pod \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\" (UID: \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\") "
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.202945 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620f0ca8-67c8-45fc-a41b-5af30115fc1b-utilities\") pod \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\" (UID: \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\") "
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.203032 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7xw8\" (UniqueName: \"kubernetes.io/projected/620f0ca8-67c8-45fc-a41b-5af30115fc1b-kube-api-access-w7xw8\") pod \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\" (UID: \"620f0ca8-67c8-45fc-a41b-5af30115fc1b\") "
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.203818 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/620f0ca8-67c8-45fc-a41b-5af30115fc1b-utilities" (OuterVolumeSpecName: "utilities") pod "620f0ca8-67c8-45fc-a41b-5af30115fc1b" (UID: "620f0ca8-67c8-45fc-a41b-5af30115fc1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.208344 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620f0ca8-67c8-45fc-a41b-5af30115fc1b-kube-api-access-w7xw8" (OuterVolumeSpecName: "kube-api-access-w7xw8") pod "620f0ca8-67c8-45fc-a41b-5af30115fc1b" (UID: "620f0ca8-67c8-45fc-a41b-5af30115fc1b"). InnerVolumeSpecName "kube-api-access-w7xw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.258829 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/620f0ca8-67c8-45fc-a41b-5af30115fc1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "620f0ca8-67c8-45fc-a41b-5af30115fc1b" (UID: "620f0ca8-67c8-45fc-a41b-5af30115fc1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.304068 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7xw8\" (UniqueName: \"kubernetes.io/projected/620f0ca8-67c8-45fc-a41b-5af30115fc1b-kube-api-access-w7xw8\") on node \"crc\" DevicePath \"\""
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.304107 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/620f0ca8-67c8-45fc-a41b-5af30115fc1b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.304116 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/620f0ca8-67c8-45fc-a41b-5af30115fc1b-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.575921 4983 generic.go:334] "Generic (PLEG): container finished" podID="620f0ca8-67c8-45fc-a41b-5af30115fc1b" containerID="f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e" exitCode=0
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.577974 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hrbgw"
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.581666 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrbgw" event={"ID":"620f0ca8-67c8-45fc-a41b-5af30115fc1b","Type":"ContainerDied","Data":"f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e"}
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.581710 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrbgw" event={"ID":"620f0ca8-67c8-45fc-a41b-5af30115fc1b","Type":"ContainerDied","Data":"136e5584b7f00d212c3d7bf8e8e0d3dafcf9ea39e27541dbb26b6fd5efa80080"}
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.581729 4983 scope.go:117] "RemoveContainer" containerID="f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e"
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.601101 4983 scope.go:117] "RemoveContainer" containerID="54bf16fcfd7b967a7aa907fdcfeb13a8d7204e5c9c31598dbc32c1fb7e7a5abf"
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.619452 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hrbgw"]
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.623624 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hrbgw"]
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.630591 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ws5rh"
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.630721 4983 scope.go:117] "RemoveContainer" containerID="9cb0a1b8d699e7225570b15cfdf19261fcd314a612ee47a4ef8062a16ffd11f6"
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.648173 4983 scope.go:117] "RemoveContainer" containerID="f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e"
Nov 25 20:30:21 crc kubenswrapper[4983]: E1125 20:30:21.648780 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e\": container with ID starting with f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e not found: ID does not exist" containerID="f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e"
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.648940 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e"} err="failed to get container status \"f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e\": rpc error: code = NotFound desc = could not find container \"f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e\": container with ID starting with f811e4389814aee3aaa5381189418eea3d71d32ea2eb677c90e528896d64230e not found: ID does not exist"
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.649098 4983 scope.go:117] "RemoveContainer" containerID="54bf16fcfd7b967a7aa907fdcfeb13a8d7204e5c9c31598dbc32c1fb7e7a5abf"
Nov 25 20:30:21 crc kubenswrapper[4983]: E1125 20:30:21.649623 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54bf16fcfd7b967a7aa907fdcfeb13a8d7204e5c9c31598dbc32c1fb7e7a5abf\": container with ID starting with 54bf16fcfd7b967a7aa907fdcfeb13a8d7204e5c9c31598dbc32c1fb7e7a5abf not found: ID does not exist" containerID="54bf16fcfd7b967a7aa907fdcfeb13a8d7204e5c9c31598dbc32c1fb7e7a5abf"
Nov 25 crc kubenswrapper[4983]: I1125 20:30:21.649757 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54bf16fcfd7b967a7aa907fdcfeb13a8d7204e5c9c31598dbc32c1fb7e7a5abf"} err="failed to get container status \"54bf16fcfd7b967a7aa907fdcfeb13a8d7204e5c9c31598dbc32c1fb7e7a5abf\": rpc error: code = NotFound desc = could not find container \"54bf16fcfd7b967a7aa907fdcfeb13a8d7204e5c9c31598dbc32c1fb7e7a5abf\": container with ID starting with 54bf16fcfd7b967a7aa907fdcfeb13a8d7204e5c9c31598dbc32c1fb7e7a5abf not found: ID does not exist"
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.649861 4983 scope.go:117] "RemoveContainer" containerID="9cb0a1b8d699e7225570b15cfdf19261fcd314a612ee47a4ef8062a16ffd11f6"
Nov 25 20:30:21 crc kubenswrapper[4983]: E1125 20:30:21.650187 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb0a1b8d699e7225570b15cfdf19261fcd314a612ee47a4ef8062a16ffd11f6\": container with ID starting with 9cb0a1b8d699e7225570b15cfdf19261fcd314a612ee47a4ef8062a16ffd11f6 not found: ID does not exist" containerID="9cb0a1b8d699e7225570b15cfdf19261fcd314a612ee47a4ef8062a16ffd11f6"
Nov 25 20:30:21 crc kubenswrapper[4983]: I1125 20:30:21.650293 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb0a1b8d699e7225570b15cfdf19261fcd314a612ee47a4ef8062a16ffd11f6"} err="failed to get container status \"9cb0a1b8d699e7225570b15cfdf19261fcd314a612ee47a4ef8062a16ffd11f6\": rpc error: code = NotFound desc = could not find container \"9cb0a1b8d699e7225570b15cfdf19261fcd314a612ee47a4ef8062a16ffd11f6\": container with ID starting with 9cb0a1b8d699e7225570b15cfdf19261fcd314a612ee47a4ef8062a16ffd11f6 not found: ID does not exist"
Nov 25 20:30:23 crc kubenswrapper[4983]: I1125 20:30:23.612422 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620f0ca8-67c8-45fc-a41b-5af30115fc1b" path="/var/lib/kubelet/pods/620f0ca8-67c8-45fc-a41b-5af30115fc1b/volumes"
Nov 25 20:30:24 crc kubenswrapper[4983]: I1125 20:30:24.036579 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ws5rh"]
Nov 25 20:30:24 crc kubenswrapper[4983]: I1125 20:30:24.037103 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ws5rh" podUID="2cc15a77-913d-42f5-90b8-116f12bcf87d" containerName="registry-server" containerID="cri-o://1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd" gracePeriod=2
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.278948 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ws5rh"
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.469215 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv942\" (UniqueName: \"kubernetes.io/projected/2cc15a77-913d-42f5-90b8-116f12bcf87d-kube-api-access-rv942\") pod \"2cc15a77-913d-42f5-90b8-116f12bcf87d\" (UID: \"2cc15a77-913d-42f5-90b8-116f12bcf87d\") "
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.469381 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc15a77-913d-42f5-90b8-116f12bcf87d-catalog-content\") pod \"2cc15a77-913d-42f5-90b8-116f12bcf87d\" (UID: \"2cc15a77-913d-42f5-90b8-116f12bcf87d\") "
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.469422 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc15a77-913d-42f5-90b8-116f12bcf87d-utilities\") pod \"2cc15a77-913d-42f5-90b8-116f12bcf87d\" (UID: \"2cc15a77-913d-42f5-90b8-116f12bcf87d\") "
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.470520 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cc15a77-913d-42f5-90b8-116f12bcf87d-utilities" (OuterVolumeSpecName: "utilities") pod "2cc15a77-913d-42f5-90b8-116f12bcf87d" (UID: "2cc15a77-913d-42f5-90b8-116f12bcf87d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.474704 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cc15a77-913d-42f5-90b8-116f12bcf87d-kube-api-access-rv942" (OuterVolumeSpecName: "kube-api-access-rv942") pod "2cc15a77-913d-42f5-90b8-116f12bcf87d" (UID: "2cc15a77-913d-42f5-90b8-116f12bcf87d"). InnerVolumeSpecName "kube-api-access-rv942". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.571040 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc15a77-913d-42f5-90b8-116f12bcf87d-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.571074 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv942\" (UniqueName: \"kubernetes.io/projected/2cc15a77-913d-42f5-90b8-116f12bcf87d-kube-api-access-rv942\") on node \"crc\" DevicePath \"\""
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.598458 4983 generic.go:334] "Generic (PLEG): container finished" podID="2cc15a77-913d-42f5-90b8-116f12bcf87d" containerID="1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd" exitCode=0
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.598498 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws5rh" event={"ID":"2cc15a77-913d-42f5-90b8-116f12bcf87d","Type":"ContainerDied","Data":"1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd"}
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.598526 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ws5rh" event={"ID":"2cc15a77-913d-42f5-90b8-116f12bcf87d","Type":"ContainerDied","Data":"d0dce9e4f8bb8269ea36daa89fda8eca795ee27338654b29519117df178d7e74"}
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.598528 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ws5rh"
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.598543 4983 scope.go:117] "RemoveContainer" containerID="1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd"
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.625588 4983 scope.go:117] "RemoveContainer" containerID="829535a06e4c911f6648272930c8ad56f009d05d2599a2a9ac3f79d3ff48b60a"
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.639742 4983 scope.go:117] "RemoveContainer" containerID="8a3dee91c0dfe952230557a5185562c1ed103b329fbde506a8af6dfce4dc8d07"
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.678156 4983 scope.go:117] "RemoveContainer" containerID="1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd"
Nov 25 20:30:25 crc kubenswrapper[4983]: E1125 20:30:25.680472 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd\": container with ID starting with 1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd not found: ID does not exist" containerID="1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd"
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.680528 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd"} err="failed to get container status \"1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd\": rpc error: code = NotFound desc = could not find container \"1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd\": container with ID starting with 1632239742df02852f41cb9b6f3e2aff17c3bf22192f901ee28c748df3d73cfd not found: ID does not exist"
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.680590 4983 scope.go:117] "RemoveContainer" containerID="829535a06e4c911f6648272930c8ad56f009d05d2599a2a9ac3f79d3ff48b60a"
Nov 25 20:30:25 crc kubenswrapper[4983]: E1125 20:30:25.680995 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"829535a06e4c911f6648272930c8ad56f009d05d2599a2a9ac3f79d3ff48b60a\": container with ID starting with 829535a06e4c911f6648272930c8ad56f009d05d2599a2a9ac3f79d3ff48b60a not found: ID does not exist" containerID="829535a06e4c911f6648272930c8ad56f009d05d2599a2a9ac3f79d3ff48b60a"
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.681045 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"829535a06e4c911f6648272930c8ad56f009d05d2599a2a9ac3f79d3ff48b60a"} err="failed to get container status \"829535a06e4c911f6648272930c8ad56f009d05d2599a2a9ac3f79d3ff48b60a\": rpc error: code = NotFound desc = could not find container \"829535a06e4c911f6648272930c8ad56f009d05d2599a2a9ac3f79d3ff48b60a\": container with ID starting with 829535a06e4c911f6648272930c8ad56f009d05d2599a2a9ac3f79d3ff48b60a not found: ID does not exist"
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.681086 4983 scope.go:117] "RemoveContainer" containerID="8a3dee91c0dfe952230557a5185562c1ed103b329fbde506a8af6dfce4dc8d07"
Nov 25 20:30:25 crc kubenswrapper[4983]: E1125 20:30:25.682367 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a3dee91c0dfe952230557a5185562c1ed103b329fbde506a8af6dfce4dc8d07\": container with ID starting with 8a3dee91c0dfe952230557a5185562c1ed103b329fbde506a8af6dfce4dc8d07 not found: ID does not exist" containerID="8a3dee91c0dfe952230557a5185562c1ed103b329fbde506a8af6dfce4dc8d07"
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.682427 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a3dee91c0dfe952230557a5185562c1ed103b329fbde506a8af6dfce4dc8d07"} err="failed to get container status \"8a3dee91c0dfe952230557a5185562c1ed103b329fbde506a8af6dfce4dc8d07\": rpc error: code = NotFound desc = could not find container \"8a3dee91c0dfe952230557a5185562c1ed103b329fbde506a8af6dfce4dc8d07\": container with ID starting with 8a3dee91c0dfe952230557a5185562c1ed103b329fbde506a8af6dfce4dc8d07 not found: ID does not exist"
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.961715 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cc15a77-913d-42f5-90b8-116f12bcf87d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cc15a77-913d-42f5-90b8-116f12bcf87d" (UID: "2cc15a77-913d-42f5-90b8-116f12bcf87d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:30:25 crc kubenswrapper[4983]: I1125 20:30:25.979236 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc15a77-913d-42f5-90b8-116f12bcf87d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 20:30:26 crc kubenswrapper[4983]: I1125 20:30:26.224504 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ws5rh"]
Nov 25 20:30:26 crc kubenswrapper[4983]: I1125 20:30:26.230151 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ws5rh"]
Nov 25 20:30:27 crc kubenswrapper[4983]: I1125 20:30:27.614023 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cc15a77-913d-42f5-90b8-116f12bcf87d" path="/var/lib/kubelet/pods/2cc15a77-913d-42f5-90b8-116f12bcf87d/volumes"
Nov 25 20:30:28 crc kubenswrapper[4983]: I1125 20:30:28.628935 4983 generic.go:334] "Generic (PLEG): container finished" podID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" containerID="2a85195ce885738278c721875c62b24ba11068114e939025489fca0a1962d4f6" exitCode=0
Nov 25 20:30:28 crc kubenswrapper[4983]: I1125 20:30:28.629317 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5mqd" event={"ID":"cb2d46db-fa4e-4967-89d2-e6993f05bb90","Type":"ContainerDied","Data":"2a85195ce885738278c721875c62b24ba11068114e939025489fca0a1962d4f6"}
Nov 25 20:30:28 crc kubenswrapper[4983]: I1125 20:30:28.633770 4983 generic.go:334] "Generic (PLEG): container finished" podID="33636a92-6a39-4007-b537-94bdfa5c9191" containerID="c3e47a762fda4e7e5df60068bd57c7f929f4f34d96727c08486ba5f6c3283636" exitCode=0
Nov 25 20:30:28 crc kubenswrapper[4983]: I1125 20:30:28.633847 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zs2f" event={"ID":"33636a92-6a39-4007-b537-94bdfa5c9191","Type":"ContainerDied","Data":"c3e47a762fda4e7e5df60068bd57c7f929f4f34d96727c08486ba5f6c3283636"}
Nov 25 20:30:28 crc kubenswrapper[4983]: I1125 20:30:28.637164 4983 generic.go:334] "Generic (PLEG): container finished" podID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" containerID="e68725a4d8b3a5be4f5217f02642a744491306179c37324f860461165985eb3a" exitCode=0
Nov 25 20:30:28 crc kubenswrapper[4983]: I1125 20:30:28.637229 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m997t" event={"ID":"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d","Type":"ContainerDied","Data":"e68725a4d8b3a5be4f5217f02642a744491306179c37324f860461165985eb3a"}
Nov 25 20:30:28 crc kubenswrapper[4983]: I1125 20:30:28.639687 4983 generic.go:334] "Generic (PLEG): container finished" podID="53324730-9a91-42c0-8d11-9f6789b9deeb" containerID="c4a954b87ba56a94e5d3efd1effdcd7712252f313acd45f85254ca1b2751ea5a" exitCode=0
Nov 25 20:30:28 crc kubenswrapper[4983]: I1125 20:30:28.639715 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rs4kr" event={"ID":"53324730-9a91-42c0-8d11-9f6789b9deeb","Type":"ContainerDied","Data":"c4a954b87ba56a94e5d3efd1effdcd7712252f313acd45f85254ca1b2751ea5a"}
Nov 25 20:30:29 crc kubenswrapper[4983]: I1125 20:30:29.647135 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rs4kr" event={"ID":"53324730-9a91-42c0-8d11-9f6789b9deeb","Type":"ContainerStarted","Data":"47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779"}
Nov 25 20:30:29 crc kubenswrapper[4983]: I1125 20:30:29.649965 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5mqd" event={"ID":"cb2d46db-fa4e-4967-89d2-e6993f05bb90","Type":"ContainerStarted","Data":"389aa7993e6293a335937f2614f3c339ff0c2b1b85d3502125163d79438b3972"}
Nov 25 20:30:29 crc kubenswrapper[4983]: I1125 20:30:29.652017 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zs2f" event={"ID":"33636a92-6a39-4007-b537-94bdfa5c9191","Type":"ContainerStarted","Data":"528f06b9758e215a119434fd35d7f3a17e963606e2a95dff03f8d128ff9ad3e6"}
Nov 25 20:30:29 crc kubenswrapper[4983]: I1125 20:30:29.654090 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m997t" event={"ID":"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d","Type":"ContainerStarted","Data":"40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3"}
Nov 25 20:30:29 crc kubenswrapper[4983]: I1125 20:30:29.672098 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rs4kr" podStartSLOduration=2.408078165 podStartE2EDuration="52.672081491s" podCreationTimestamp="2025-11-25 20:29:37 +0000 UTC" firstStartedPulling="2025-11-25 20:29:38.801380242 +0000 UTC m=+159.913913634" lastFinishedPulling="2025-11-25 20:30:29.065383568 +0000 UTC m=+210.177916960" observedRunningTime="2025-11-25 20:30:29.668141509 +0000 UTC m=+210.780674901" watchObservedRunningTime="2025-11-25 20:30:29.672081491 +0000 UTC m=+210.784614883"
Nov 25 20:30:29 crc kubenswrapper[4983]: I1125 20:30:29.685593 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v5mqd" podStartSLOduration=2.383413254 podStartE2EDuration="52.68551567s" podCreationTimestamp="2025-11-25 20:29:37 +0000 UTC" firstStartedPulling="2025-11-25 20:29:38.811777152 +0000 UTC m=+159.924310544" lastFinishedPulling="2025-11-25 20:30:29.113879568 +0000 UTC m=+210.226412960" observedRunningTime="2025-11-25 20:30:29.683915948 +0000 UTC m=+210.796449350" watchObservedRunningTime="2025-11-25 20:30:29.68551567 +0000 UTC m=+210.798049062"
Nov 25 20:30:29 crc kubenswrapper[4983]: I1125 20:30:29.713179 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openshift-marketplace/community-operators-4zs2f" podStartSLOduration=3.501330818 podStartE2EDuration="53.713157658s" podCreationTimestamp="2025-11-25 20:29:36 +0000 UTC" firstStartedPulling="2025-11-25 20:29:38.829312338 +0000 UTC m=+159.941845740" lastFinishedPulling="2025-11-25 20:30:29.041139188 +0000 UTC m=+210.153672580" observedRunningTime="2025-11-25 20:30:29.709862503 +0000 UTC m=+210.822395895" watchObservedRunningTime="2025-11-25 20:30:29.713157658 +0000 UTC m=+210.825691050" Nov 25 20:30:29 crc kubenswrapper[4983]: I1125 20:30:29.739837 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m997t" podStartSLOduration=3.574729831 podStartE2EDuration="50.739821221s" podCreationTimestamp="2025-11-25 20:29:39 +0000 UTC" firstStartedPulling="2025-11-25 20:29:41.974926797 +0000 UTC m=+163.087460189" lastFinishedPulling="2025-11-25 20:30:29.140018187 +0000 UTC m=+210.252551579" observedRunningTime="2025-11-25 20:30:29.739606855 +0000 UTC m=+210.852140247" watchObservedRunningTime="2025-11-25 20:30:29.739821221 +0000 UTC m=+210.852354613" Nov 25 20:30:29 crc kubenswrapper[4983]: I1125 20:30:29.963318 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:30:29 crc kubenswrapper[4983]: I1125 20:30:29.963671 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:30:31 crc kubenswrapper[4983]: I1125 20:30:31.007242 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-m997t" podUID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" containerName="registry-server" probeResult="failure" output=< Nov 25 20:30:31 crc kubenswrapper[4983]: timeout: failed to connect service ":50051" within 1s Nov 25 20:30:31 crc kubenswrapper[4983]: > Nov 25 20:30:37 crc kubenswrapper[4983]: I1125 20:30:37.227869 4983 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:30:37 crc kubenswrapper[4983]: I1125 20:30:37.228714 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:30:37 crc kubenswrapper[4983]: I1125 20:30:37.317919 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:30:37 crc kubenswrapper[4983]: I1125 20:30:37.683802 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:30:37 crc kubenswrapper[4983]: I1125 20:30:37.683843 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:30:37 crc kubenswrapper[4983]: I1125 20:30:37.734746 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:30:37 crc kubenswrapper[4983]: I1125 20:30:37.767445 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:30:37 crc kubenswrapper[4983]: I1125 20:30:37.786335 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:30:37 crc kubenswrapper[4983]: I1125 20:30:37.799114 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:30:37 crc kubenswrapper[4983]: I1125 20:30:37.799174 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:30:37 crc kubenswrapper[4983]: I1125 20:30:37.850748 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:30:38 crc kubenswrapper[4983]: I1125 20:30:38.749729 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:30:39 crc kubenswrapper[4983]: I1125 20:30:39.928712 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:30:39 crc kubenswrapper[4983]: I1125 20:30:39.928792 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:30:39 crc kubenswrapper[4983]: I1125 20:30:39.928857 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:30:39 crc kubenswrapper[4983]: I1125 20:30:39.929741 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:30:39 crc kubenswrapper[4983]: I1125 20:30:39.929872 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" 
containerID="cri-o://fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c" gracePeriod=600 Nov 25 20:30:40 crc kubenswrapper[4983]: I1125 20:30:40.020994 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:30:40 crc kubenswrapper[4983]: I1125 20:30:40.090154 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:30:40 crc kubenswrapper[4983]: I1125 20:30:40.716682 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c" exitCode=0 Nov 25 20:30:40 crc kubenswrapper[4983]: I1125 20:30:40.716768 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c"} Nov 25 20:30:40 crc kubenswrapper[4983]: I1125 20:30:40.717078 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"cedbe1d0d40fe4f150b02eabc08807db470ed60486ee4e83fdd5c11bc49792fa"} Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.059356 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rs4kr"] Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.059632 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rs4kr" podUID="53324730-9a91-42c0-8d11-9f6789b9deeb" containerName="registry-server" containerID="cri-o://47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779" gracePeriod=2 Nov 25 20:30:41 crc kubenswrapper[4983]: 
I1125 20:30:41.408646 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.505592 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53324730-9a91-42c0-8d11-9f6789b9deeb-utilities\") pod \"53324730-9a91-42c0-8d11-9f6789b9deeb\" (UID: \"53324730-9a91-42c0-8d11-9f6789b9deeb\") " Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.505693 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53324730-9a91-42c0-8d11-9f6789b9deeb-catalog-content\") pod \"53324730-9a91-42c0-8d11-9f6789b9deeb\" (UID: \"53324730-9a91-42c0-8d11-9f6789b9deeb\") " Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.505726 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66b97\" (UniqueName: \"kubernetes.io/projected/53324730-9a91-42c0-8d11-9f6789b9deeb-kube-api-access-66b97\") pod \"53324730-9a91-42c0-8d11-9f6789b9deeb\" (UID: \"53324730-9a91-42c0-8d11-9f6789b9deeb\") " Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.506626 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53324730-9a91-42c0-8d11-9f6789b9deeb-utilities" (OuterVolumeSpecName: "utilities") pod "53324730-9a91-42c0-8d11-9f6789b9deeb" (UID: "53324730-9a91-42c0-8d11-9f6789b9deeb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.514841 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53324730-9a91-42c0-8d11-9f6789b9deeb-kube-api-access-66b97" (OuterVolumeSpecName: "kube-api-access-66b97") pod "53324730-9a91-42c0-8d11-9f6789b9deeb" (UID: "53324730-9a91-42c0-8d11-9f6789b9deeb"). InnerVolumeSpecName "kube-api-access-66b97". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.552617 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53324730-9a91-42c0-8d11-9f6789b9deeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53324730-9a91-42c0-8d11-9f6789b9deeb" (UID: "53324730-9a91-42c0-8d11-9f6789b9deeb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.607286 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66b97\" (UniqueName: \"kubernetes.io/projected/53324730-9a91-42c0-8d11-9f6789b9deeb-kube-api-access-66b97\") on node \"crc\" DevicePath \"\"" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.607496 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53324730-9a91-42c0-8d11-9f6789b9deeb-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.607628 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53324730-9a91-42c0-8d11-9f6789b9deeb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.726864 4983 generic.go:334] "Generic (PLEG): container finished" podID="53324730-9a91-42c0-8d11-9f6789b9deeb" 
containerID="47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779" exitCode=0 Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.726953 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rs4kr" event={"ID":"53324730-9a91-42c0-8d11-9f6789b9deeb","Type":"ContainerDied","Data":"47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779"} Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.726972 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rs4kr" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.726998 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rs4kr" event={"ID":"53324730-9a91-42c0-8d11-9f6789b9deeb","Type":"ContainerDied","Data":"5559a1da7bfa96b611a9f0935e747c8c0bc6d4a146433b986f38e7f8f2b2b0ae"} Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.727052 4983 scope.go:117] "RemoveContainer" containerID="47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.749829 4983 scope.go:117] "RemoveContainer" containerID="c4a954b87ba56a94e5d3efd1effdcd7712252f313acd45f85254ca1b2751ea5a" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.756259 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rs4kr"] Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.760245 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rs4kr"] Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.769774 4983 scope.go:117] "RemoveContainer" containerID="2c0f93fae0c4279b1ea01b4a689f2ba559a9c31f48a311badfba5abac51deb6c" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.783815 4983 scope.go:117] "RemoveContainer" containerID="47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779" Nov 25 
20:30:41 crc kubenswrapper[4983]: E1125 20:30:41.784302 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779\": container with ID starting with 47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779 not found: ID does not exist" containerID="47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.784353 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779"} err="failed to get container status \"47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779\": rpc error: code = NotFound desc = could not find container \"47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779\": container with ID starting with 47a2fd97b523d0ffa68c0bf57bd9369d0366d12fbfe7d16cda18d8403b6df779 not found: ID does not exist" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.784389 4983 scope.go:117] "RemoveContainer" containerID="c4a954b87ba56a94e5d3efd1effdcd7712252f313acd45f85254ca1b2751ea5a" Nov 25 20:30:41 crc kubenswrapper[4983]: E1125 20:30:41.784738 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4a954b87ba56a94e5d3efd1effdcd7712252f313acd45f85254ca1b2751ea5a\": container with ID starting with c4a954b87ba56a94e5d3efd1effdcd7712252f313acd45f85254ca1b2751ea5a not found: ID does not exist" containerID="c4a954b87ba56a94e5d3efd1effdcd7712252f313acd45f85254ca1b2751ea5a" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.784780 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4a954b87ba56a94e5d3efd1effdcd7712252f313acd45f85254ca1b2751ea5a"} err="failed to get container status 
\"c4a954b87ba56a94e5d3efd1effdcd7712252f313acd45f85254ca1b2751ea5a\": rpc error: code = NotFound desc = could not find container \"c4a954b87ba56a94e5d3efd1effdcd7712252f313acd45f85254ca1b2751ea5a\": container with ID starting with c4a954b87ba56a94e5d3efd1effdcd7712252f313acd45f85254ca1b2751ea5a not found: ID does not exist" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.784815 4983 scope.go:117] "RemoveContainer" containerID="2c0f93fae0c4279b1ea01b4a689f2ba559a9c31f48a311badfba5abac51deb6c" Nov 25 20:30:41 crc kubenswrapper[4983]: E1125 20:30:41.785084 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0f93fae0c4279b1ea01b4a689f2ba559a9c31f48a311badfba5abac51deb6c\": container with ID starting with 2c0f93fae0c4279b1ea01b4a689f2ba559a9c31f48a311badfba5abac51deb6c not found: ID does not exist" containerID="2c0f93fae0c4279b1ea01b4a689f2ba559a9c31f48a311badfba5abac51deb6c" Nov 25 20:30:41 crc kubenswrapper[4983]: I1125 20:30:41.785118 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0f93fae0c4279b1ea01b4a689f2ba559a9c31f48a311badfba5abac51deb6c"} err="failed to get container status \"2c0f93fae0c4279b1ea01b4a689f2ba559a9c31f48a311badfba5abac51deb6c\": rpc error: code = NotFound desc = could not find container \"2c0f93fae0c4279b1ea01b4a689f2ba559a9c31f48a311badfba5abac51deb6c\": container with ID starting with 2c0f93fae0c4279b1ea01b4a689f2ba559a9c31f48a311badfba5abac51deb6c not found: ID does not exist" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.059221 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m997t"] Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.059487 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m997t" podUID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" 
containerName="registry-server" containerID="cri-o://40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3" gracePeriod=2 Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.399032 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.417512 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-catalog-content\") pod \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\" (UID: \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\") " Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.417616 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-utilities\") pod \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\" (UID: \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\") " Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.417713 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh2b7\" (UniqueName: \"kubernetes.io/projected/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-kube-api-access-qh2b7\") pod \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\" (UID: \"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d\") " Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.419780 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-utilities" (OuterVolumeSpecName: "utilities") pod "bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" (UID: "bafc524a-ba74-4b1b-9fdb-2054db1c2a4d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.425847 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-kube-api-access-qh2b7" (OuterVolumeSpecName: "kube-api-access-qh2b7") pod "bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" (UID: "bafc524a-ba74-4b1b-9fdb-2054db1c2a4d"). InnerVolumeSpecName "kube-api-access-qh2b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.442270 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" (UID: "bafc524a-ba74-4b1b-9fdb-2054db1c2a4d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.518807 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.518866 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.518877 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh2b7\" (UniqueName: \"kubernetes.io/projected/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d-kube-api-access-qh2b7\") on node \"crc\" DevicePath \"\"" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.734407 4983 generic.go:334] "Generic (PLEG): container finished" podID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" 
containerID="40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3" exitCode=0 Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.734461 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m997t" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.734492 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m997t" event={"ID":"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d","Type":"ContainerDied","Data":"40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3"} Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.734908 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m997t" event={"ID":"bafc524a-ba74-4b1b-9fdb-2054db1c2a4d","Type":"ContainerDied","Data":"53afbae54defe7e6731e01fed5cecb1d4b001167e692ff99301fee8175bed3ee"} Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.734936 4983 scope.go:117] "RemoveContainer" containerID="40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.755391 4983 scope.go:117] "RemoveContainer" containerID="e68725a4d8b3a5be4f5217f02642a744491306179c37324f860461165985eb3a" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.773663 4983 scope.go:117] "RemoveContainer" containerID="ccb27ed1427fa79d129cb531898765d0ce1b2f97c42794c3812d422ca05441be" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.780321 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m997t"] Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.784183 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m997t"] Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.797067 4983 scope.go:117] "RemoveContainer" containerID="40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3" Nov 25 
20:30:42 crc kubenswrapper[4983]: E1125 20:30:42.797482 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3\": container with ID starting with 40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3 not found: ID does not exist" containerID="40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.797524 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3"} err="failed to get container status \"40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3\": rpc error: code = NotFound desc = could not find container \"40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3\": container with ID starting with 40037deefd0e7ff554b5f57e56664bea6c201c923d95d25d6958c54f1b7862f3 not found: ID does not exist" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.797546 4983 scope.go:117] "RemoveContainer" containerID="e68725a4d8b3a5be4f5217f02642a744491306179c37324f860461165985eb3a" Nov 25 20:30:42 crc kubenswrapper[4983]: E1125 20:30:42.797836 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e68725a4d8b3a5be4f5217f02642a744491306179c37324f860461165985eb3a\": container with ID starting with e68725a4d8b3a5be4f5217f02642a744491306179c37324f860461165985eb3a not found: ID does not exist" containerID="e68725a4d8b3a5be4f5217f02642a744491306179c37324f860461165985eb3a" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.797858 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68725a4d8b3a5be4f5217f02642a744491306179c37324f860461165985eb3a"} err="failed to get container status 
\"e68725a4d8b3a5be4f5217f02642a744491306179c37324f860461165985eb3a\": rpc error: code = NotFound desc = could not find container \"e68725a4d8b3a5be4f5217f02642a744491306179c37324f860461165985eb3a\": container with ID starting with e68725a4d8b3a5be4f5217f02642a744491306179c37324f860461165985eb3a not found: ID does not exist" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.797887 4983 scope.go:117] "RemoveContainer" containerID="ccb27ed1427fa79d129cb531898765d0ce1b2f97c42794c3812d422ca05441be" Nov 25 20:30:42 crc kubenswrapper[4983]: E1125 20:30:42.798177 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb27ed1427fa79d129cb531898765d0ce1b2f97c42794c3812d422ca05441be\": container with ID starting with ccb27ed1427fa79d129cb531898765d0ce1b2f97c42794c3812d422ca05441be not found: ID does not exist" containerID="ccb27ed1427fa79d129cb531898765d0ce1b2f97c42794c3812d422ca05441be" Nov 25 20:30:42 crc kubenswrapper[4983]: I1125 20:30:42.798276 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb27ed1427fa79d129cb531898765d0ce1b2f97c42794c3812d422ca05441be"} err="failed to get container status \"ccb27ed1427fa79d129cb531898765d0ce1b2f97c42794c3812d422ca05441be\": rpc error: code = NotFound desc = could not find container \"ccb27ed1427fa79d129cb531898765d0ce1b2f97c42794c3812d422ca05441be\": container with ID starting with ccb27ed1427fa79d129cb531898765d0ce1b2f97c42794c3812d422ca05441be not found: ID does not exist" Nov 25 20:30:43 crc kubenswrapper[4983]: I1125 20:30:43.611155 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53324730-9a91-42c0-8d11-9f6789b9deeb" path="/var/lib/kubelet/pods/53324730-9a91-42c0-8d11-9f6789b9deeb/volumes" Nov 25 20:30:43 crc kubenswrapper[4983]: I1125 20:30:43.611813 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" 
path="/var/lib/kubelet/pods/bafc524a-ba74-4b1b-9fdb-2054db1c2a4d/volumes" Nov 25 20:30:50 crc kubenswrapper[4983]: I1125 20:30:50.615323 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9zs6k"] Nov 25 20:31:15 crc kubenswrapper[4983]: I1125 20:31:15.648520 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" podUID="d10a20ce-f44b-45b4-b199-759adf792fe0" containerName="oauth-openshift" containerID="cri-o://15aa319f4cc2213a57086dedd6c32607d2c2bf01e67f39c9517997063e61f77e" gracePeriod=15 Nov 25 20:31:15 crc kubenswrapper[4983]: I1125 20:31:15.924159 4983 generic.go:334] "Generic (PLEG): container finished" podID="d10a20ce-f44b-45b4-b199-759adf792fe0" containerID="15aa319f4cc2213a57086dedd6c32607d2c2bf01e67f39c9517997063e61f77e" exitCode=0 Nov 25 20:31:15 crc kubenswrapper[4983]: I1125 20:31:15.924270 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" event={"ID":"d10a20ce-f44b-45b4-b199-759adf792fe0","Type":"ContainerDied","Data":"15aa319f4cc2213a57086dedd6c32607d2c2bf01e67f39c9517997063e61f77e"} Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.097161 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.137476 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6bf5fff678-n8vsq"] Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.137812 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620f0ca8-67c8-45fc-a41b-5af30115fc1b" containerName="extract-utilities" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.137842 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="620f0ca8-67c8-45fc-a41b-5af30115fc1b" containerName="extract-utilities" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.137866 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" containerName="registry-server" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.137879 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" containerName="registry-server" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.137896 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d10a20ce-f44b-45b4-b199-759adf792fe0" containerName="oauth-openshift" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.137909 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="d10a20ce-f44b-45b4-b199-759adf792fe0" containerName="oauth-openshift" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.137930 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc15a77-913d-42f5-90b8-116f12bcf87d" containerName="registry-server" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.137943 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc15a77-913d-42f5-90b8-116f12bcf87d" containerName="registry-server" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.137964 4983 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2cc15a77-913d-42f5-90b8-116f12bcf87d" containerName="extract-content" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.137977 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc15a77-913d-42f5-90b8-116f12bcf87d" containerName="extract-content" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.137995 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" containerName="extract-utilities" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138008 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" containerName="extract-utilities" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.138035 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620f0ca8-67c8-45fc-a41b-5af30115fc1b" containerName="registry-server" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138048 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="620f0ca8-67c8-45fc-a41b-5af30115fc1b" containerName="registry-server" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.138064 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53324730-9a91-42c0-8d11-9f6789b9deeb" containerName="extract-content" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138077 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="53324730-9a91-42c0-8d11-9f6789b9deeb" containerName="extract-content" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.138099 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" containerName="extract-content" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138111 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" containerName="extract-content" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.138132 4983 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="53324730-9a91-42c0-8d11-9f6789b9deeb" containerName="extract-utilities" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138145 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="53324730-9a91-42c0-8d11-9f6789b9deeb" containerName="extract-utilities" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.138163 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbad7ed5-4e2f-4c15-98f6-88b58a937f18" containerName="collect-profiles" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138176 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbad7ed5-4e2f-4c15-98f6-88b58a937f18" containerName="collect-profiles" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.138195 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53324730-9a91-42c0-8d11-9f6789b9deeb" containerName="registry-server" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138208 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="53324730-9a91-42c0-8d11-9f6789b9deeb" containerName="registry-server" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.138227 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620f0ca8-67c8-45fc-a41b-5af30115fc1b" containerName="extract-content" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138240 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="620f0ca8-67c8-45fc-a41b-5af30115fc1b" containerName="extract-content" Nov 25 20:31:16 crc kubenswrapper[4983]: E1125 20:31:16.138259 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cc15a77-913d-42f5-90b8-116f12bcf87d" containerName="extract-utilities" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138272 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cc15a77-913d-42f5-90b8-116f12bcf87d" containerName="extract-utilities" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138445 4983 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bbad7ed5-4e2f-4c15-98f6-88b58a937f18" containerName="collect-profiles" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138462 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="620f0ca8-67c8-45fc-a41b-5af30115fc1b" containerName="registry-server" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138488 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="bafc524a-ba74-4b1b-9fdb-2054db1c2a4d" containerName="registry-server" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138533 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="53324730-9a91-42c0-8d11-9f6789b9deeb" containerName="registry-server" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138582 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cc15a77-913d-42f5-90b8-116f12bcf87d" containerName="registry-server" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.138598 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="d10a20ce-f44b-45b4-b199-759adf792fe0" containerName="oauth-openshift" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.139163 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.167951 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6bf5fff678-n8vsq"] Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.255670 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5tvx\" (UniqueName: \"kubernetes.io/projected/d10a20ce-f44b-45b4-b199-759adf792fe0-kube-api-access-g5tvx\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.255751 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-error\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.255826 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-idp-0-file-data\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.255900 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-router-certs\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.255928 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-serving-cert\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.255954 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-session\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.255976 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-login\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256000 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-ocp-branding-template\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256023 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-trusted-ca-bundle\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256047 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-cliconfig\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256076 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-provider-selection\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256106 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-audit-policies\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256142 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d10a20ce-f44b-45b4-b199-759adf792fe0-audit-dir\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256166 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-service-ca\") pod \"d10a20ce-f44b-45b4-b199-759adf792fe0\" (UID: \"d10a20ce-f44b-45b4-b199-759adf792fe0\") " Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256353 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256411 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-user-template-login\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256443 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-user-template-error\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256467 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256492 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " 
pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256534 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256887 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-session\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256914 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256947 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6b01a0dc-7665-4c23-9571-09d02b41253f-audit-policies\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.256976 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.257078 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6b01a0dc-7665-4c23-9571-09d02b41253f-audit-dir\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.257100 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd2f8\" (UniqueName: \"kubernetes.io/projected/6b01a0dc-7665-4c23-9571-09d02b41253f-kube-api-access-fd2f8\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.257123 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.257155 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.257260 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d10a20ce-f44b-45b4-b199-759adf792fe0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.257279 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.260693 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.260772 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.261950 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.262278 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.262458 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.262746 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.263252 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.263596 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.263850 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.264021 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10a20ce-f44b-45b4-b199-759adf792fe0-kube-api-access-g5tvx" (OuterVolumeSpecName: "kube-api-access-g5tvx") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). InnerVolumeSpecName "kube-api-access-g5tvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.264540 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.265933 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d10a20ce-f44b-45b4-b199-759adf792fe0" (UID: "d10a20ce-f44b-45b4-b199-759adf792fe0"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358638 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-session\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358693 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358716 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6b01a0dc-7665-4c23-9571-09d02b41253f-audit-policies\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358740 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358766 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/6b01a0dc-7665-4c23-9571-09d02b41253f-audit-dir\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358785 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358804 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd2f8\" (UniqueName: \"kubernetes.io/projected/6b01a0dc-7665-4c23-9571-09d02b41253f-kube-api-access-fd2f8\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358827 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358852 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" 
Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358881 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-user-template-login\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358906 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-user-template-error\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358924 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358942 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.358964 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359008 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5tvx\" (UniqueName: \"kubernetes.io/projected/d10a20ce-f44b-45b4-b199-759adf792fe0-kube-api-access-g5tvx\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359021 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359031 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359043 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359052 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359062 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-session\") on node 
\"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359072 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359081 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359090 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359099 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359109 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359118 4983 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359127 4983 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/d10a20ce-f44b-45b4-b199-759adf792fe0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.359137 4983 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d10a20ce-f44b-45b4-b199-759adf792fe0-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.362399 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.364407 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6b01a0dc-7665-4c23-9571-09d02b41253f-audit-dir\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.364438 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.364657 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: 
\"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.365874 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.366453 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6b01a0dc-7665-4c23-9571-09d02b41253f-audit-policies\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.366637 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-user-template-error\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.368128 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.368310 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-session\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.368941 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.369012 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.369395 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-user-template-login\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.371465 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6b01a0dc-7665-4c23-9571-09d02b41253f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " 
pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.379121 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd2f8\" (UniqueName: \"kubernetes.io/projected/6b01a0dc-7665-4c23-9571-09d02b41253f-kube-api-access-fd2f8\") pod \"oauth-openshift-6bf5fff678-n8vsq\" (UID: \"6b01a0dc-7665-4c23-9571-09d02b41253f\") " pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.473028 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.751903 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6bf5fff678-n8vsq"] Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.932056 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" event={"ID":"6b01a0dc-7665-4c23-9571-09d02b41253f","Type":"ContainerStarted","Data":"7690f60db648229f216f67d04e1904e0a3591a6c150dc8197cd84411f545744a"} Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.935978 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" event={"ID":"d10a20ce-f44b-45b4-b199-759adf792fe0","Type":"ContainerDied","Data":"1dbc71baa75985fa402a891c0efea19a3d13f7e2e2f90c8d04e4eb2d40736148"} Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.936060 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9zs6k" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.936074 4983 scope.go:117] "RemoveContainer" containerID="15aa319f4cc2213a57086dedd6c32607d2c2bf01e67f39c9517997063e61f77e" Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.970665 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9zs6k"] Nov 25 20:31:16 crc kubenswrapper[4983]: I1125 20:31:16.973748 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9zs6k"] Nov 25 20:31:17 crc kubenswrapper[4983]: I1125 20:31:17.615619 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d10a20ce-f44b-45b4-b199-759adf792fe0" path="/var/lib/kubelet/pods/d10a20ce-f44b-45b4-b199-759adf792fe0/volumes" Nov 25 20:31:17 crc kubenswrapper[4983]: I1125 20:31:17.944273 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" event={"ID":"6b01a0dc-7665-4c23-9571-09d02b41253f","Type":"ContainerStarted","Data":"f2de8da398ba285c311368f7119e180cb3ba3fd6685b53c095ce8f4e80e9458a"} Nov 25 20:31:17 crc kubenswrapper[4983]: I1125 20:31:17.944845 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:17 crc kubenswrapper[4983]: I1125 20:31:17.952351 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" Nov 25 20:31:17 crc kubenswrapper[4983]: I1125 20:31:17.973672 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6bf5fff678-n8vsq" podStartSLOduration=27.973655737 podStartE2EDuration="27.973655737s" podCreationTimestamp="2025-11-25 20:30:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:31:17.971590283 +0000 UTC m=+259.084123675" watchObservedRunningTime="2025-11-25 20:31:17.973655737 +0000 UTC m=+259.086189129" Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.833071 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v5mqd"] Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.833875 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v5mqd" podUID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" containerName="registry-server" containerID="cri-o://389aa7993e6293a335937f2614f3c339ff0c2b1b85d3502125163d79438b3972" gracePeriod=30 Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.842079 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4zs2f"] Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.842300 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4zs2f" podUID="33636a92-6a39-4007-b537-94bdfa5c9191" containerName="registry-server" containerID="cri-o://528f06b9758e215a119434fd35d7f3a17e963606e2a95dff03f8d128ff9ad3e6" gracePeriod=30 Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.850676 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cvg4v"] Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.850886 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" podUID="50f76fe0-cc37-4a22-bb1a-7df5d6012224" containerName="marketplace-operator" containerID="cri-o://1bcd13bb77a0531aaa1da9520a219c71c110ae029595c14abd244a075d189d7f" gracePeriod=30 Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.867354 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xzpql"] Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.867612 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xzpql" podUID="861534ba-185f-47e0-a0dd-4ce6e14c80ca" containerName="registry-server" containerID="cri-o://cec0001da0d5a37daef1442b17ea19e390f6a4f6918e23f2c3574191af9384b6" gracePeriod=30 Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.876305 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kh7rb"] Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.877332 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.886125 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ss2qz"] Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.886508 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ss2qz" podUID="92386321-ac04-4379-b4cb-7111d7328dad" containerName="registry-server" containerID="cri-o://df22c72001a8368053abaf009bdf5e19d5548207184c5d6b756b76f76e341084" gracePeriod=30 Nov 25 20:31:36 crc kubenswrapper[4983]: I1125 20:31:36.906953 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kh7rb"] Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.024102 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/168ec053-d5d4-4ebc-956d-429c0d2ff5fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kh7rb\" (UID: \"168ec053-d5d4-4ebc-956d-429c0d2ff5fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 
25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.024172 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/168ec053-d5d4-4ebc-956d-429c0d2ff5fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kh7rb\" (UID: \"168ec053-d5d4-4ebc-956d-429c0d2ff5fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.024202 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgmxf\" (UniqueName: \"kubernetes.io/projected/168ec053-d5d4-4ebc-956d-429c0d2ff5fb-kube-api-access-tgmxf\") pod \"marketplace-operator-79b997595-kh7rb\" (UID: \"168ec053-d5d4-4ebc-956d-429c0d2ff5fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.059163 4983 generic.go:334] "Generic (PLEG): container finished" podID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" containerID="389aa7993e6293a335937f2614f3c339ff0c2b1b85d3502125163d79438b3972" exitCode=0 Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.059234 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5mqd" event={"ID":"cb2d46db-fa4e-4967-89d2-e6993f05bb90","Type":"ContainerDied","Data":"389aa7993e6293a335937f2614f3c339ff0c2b1b85d3502125163d79438b3972"} Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.060782 4983 generic.go:334] "Generic (PLEG): container finished" podID="33636a92-6a39-4007-b537-94bdfa5c9191" containerID="528f06b9758e215a119434fd35d7f3a17e963606e2a95dff03f8d128ff9ad3e6" exitCode=0 Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.060816 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zs2f" 
event={"ID":"33636a92-6a39-4007-b537-94bdfa5c9191","Type":"ContainerDied","Data":"528f06b9758e215a119434fd35d7f3a17e963606e2a95dff03f8d128ff9ad3e6"} Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.062226 4983 generic.go:334] "Generic (PLEG): container finished" podID="92386321-ac04-4379-b4cb-7111d7328dad" containerID="df22c72001a8368053abaf009bdf5e19d5548207184c5d6b756b76f76e341084" exitCode=0 Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.062260 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ss2qz" event={"ID":"92386321-ac04-4379-b4cb-7111d7328dad","Type":"ContainerDied","Data":"df22c72001a8368053abaf009bdf5e19d5548207184c5d6b756b76f76e341084"} Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.063318 4983 generic.go:334] "Generic (PLEG): container finished" podID="50f76fe0-cc37-4a22-bb1a-7df5d6012224" containerID="1bcd13bb77a0531aaa1da9520a219c71c110ae029595c14abd244a075d189d7f" exitCode=0 Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.063379 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" event={"ID":"50f76fe0-cc37-4a22-bb1a-7df5d6012224","Type":"ContainerDied","Data":"1bcd13bb77a0531aaa1da9520a219c71c110ae029595c14abd244a075d189d7f"} Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.065248 4983 generic.go:334] "Generic (PLEG): container finished" podID="861534ba-185f-47e0-a0dd-4ce6e14c80ca" containerID="cec0001da0d5a37daef1442b17ea19e390f6a4f6918e23f2c3574191af9384b6" exitCode=0 Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.065271 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzpql" event={"ID":"861534ba-185f-47e0-a0dd-4ce6e14c80ca","Type":"ContainerDied","Data":"cec0001da0d5a37daef1442b17ea19e390f6a4f6918e23f2c3574191af9384b6"} Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.125190 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/168ec053-d5d4-4ebc-956d-429c0d2ff5fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kh7rb\" (UID: \"168ec053-d5d4-4ebc-956d-429c0d2ff5fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.125235 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/168ec053-d5d4-4ebc-956d-429c0d2ff5fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kh7rb\" (UID: \"168ec053-d5d4-4ebc-956d-429c0d2ff5fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.125252 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgmxf\" (UniqueName: \"kubernetes.io/projected/168ec053-d5d4-4ebc-956d-429c0d2ff5fb-kube-api-access-tgmxf\") pod \"marketplace-operator-79b997595-kh7rb\" (UID: \"168ec053-d5d4-4ebc-956d-429c0d2ff5fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.126720 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/168ec053-d5d4-4ebc-956d-429c0d2ff5fb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kh7rb\" (UID: \"168ec053-d5d4-4ebc-956d-429c0d2ff5fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.140337 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/168ec053-d5d4-4ebc-956d-429c0d2ff5fb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kh7rb\" (UID: 
\"168ec053-d5d4-4ebc-956d-429c0d2ff5fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.142415 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgmxf\" (UniqueName: \"kubernetes.io/projected/168ec053-d5d4-4ebc-956d-429c0d2ff5fb-kube-api-access-tgmxf\") pod \"marketplace-operator-79b997595-kh7rb\" (UID: \"168ec053-d5d4-4ebc-956d-429c0d2ff5fb\") " pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 25 20:31:37 crc kubenswrapper[4983]: E1125 20:31:37.228451 4983 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 528f06b9758e215a119434fd35d7f3a17e963606e2a95dff03f8d128ff9ad3e6 is running failed: container process not found" containerID="528f06b9758e215a119434fd35d7f3a17e963606e2a95dff03f8d128ff9ad3e6" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 20:31:37 crc kubenswrapper[4983]: E1125 20:31:37.228931 4983 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 528f06b9758e215a119434fd35d7f3a17e963606e2a95dff03f8d128ff9ad3e6 is running failed: container process not found" containerID="528f06b9758e215a119434fd35d7f3a17e963606e2a95dff03f8d128ff9ad3e6" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 20:31:37 crc kubenswrapper[4983]: E1125 20:31:37.229157 4983 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 528f06b9758e215a119434fd35d7f3a17e963606e2a95dff03f8d128ff9ad3e6 is running failed: container process not found" containerID="528f06b9758e215a119434fd35d7f3a17e963606e2a95dff03f8d128ff9ad3e6" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 20:31:37 crc kubenswrapper[4983]: E1125 20:31:37.229203 4983 prober.go:104] "Probe errored" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of 528f06b9758e215a119434fd35d7f3a17e963606e2a95dff03f8d128ff9ad3e6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-4zs2f" podUID="33636a92-6a39-4007-b537-94bdfa5c9191" containerName="registry-server" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.287811 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.291823 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.295048 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.301631 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.306033 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.335316 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.430564 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgplg\" (UniqueName: \"kubernetes.io/projected/cb2d46db-fa4e-4967-89d2-e6993f05bb90-kube-api-access-sgplg\") pod \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\" (UID: \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.430642 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861534ba-185f-47e0-a0dd-4ce6e14c80ca-utilities\") pod \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\" (UID: \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.430670 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zsqk\" (UniqueName: \"kubernetes.io/projected/861534ba-185f-47e0-a0dd-4ce6e14c80ca-kube-api-access-8zsqk\") pod \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\" (UID: \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.430710 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlfp7\" (UniqueName: \"kubernetes.io/projected/50f76fe0-cc37-4a22-bb1a-7df5d6012224-kube-api-access-xlfp7\") pod \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.430835 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-trusted-ca\") pod \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.430869 4983 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hbppk\" (UniqueName: \"kubernetes.io/projected/92386321-ac04-4379-b4cb-7111d7328dad-kube-api-access-hbppk\") pod \"92386321-ac04-4379-b4cb-7111d7328dad\" (UID: \"92386321-ac04-4379-b4cb-7111d7328dad\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.430891 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33636a92-6a39-4007-b537-94bdfa5c9191-utilities\") pod \"33636a92-6a39-4007-b537-94bdfa5c9191\" (UID: \"33636a92-6a39-4007-b537-94bdfa5c9191\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.430963 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-operator-metrics\") pod \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\" (UID: \"50f76fe0-cc37-4a22-bb1a-7df5d6012224\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.430983 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861534ba-185f-47e0-a0dd-4ce6e14c80ca-catalog-content\") pod \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\" (UID: \"861534ba-185f-47e0-a0dd-4ce6e14c80ca\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.431011 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92386321-ac04-4379-b4cb-7111d7328dad-utilities\") pod \"92386321-ac04-4379-b4cb-7111d7328dad\" (UID: \"92386321-ac04-4379-b4cb-7111d7328dad\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.431026 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92386321-ac04-4379-b4cb-7111d7328dad-catalog-content\") pod 
\"92386321-ac04-4379-b4cb-7111d7328dad\" (UID: \"92386321-ac04-4379-b4cb-7111d7328dad\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.431045 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2d46db-fa4e-4967-89d2-e6993f05bb90-utilities\") pod \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\" (UID: \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.431069 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2d46db-fa4e-4967-89d2-e6993f05bb90-catalog-content\") pod \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\" (UID: \"cb2d46db-fa4e-4967-89d2-e6993f05bb90\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.431461 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/861534ba-185f-47e0-a0dd-4ce6e14c80ca-utilities" (OuterVolumeSpecName: "utilities") pod "861534ba-185f-47e0-a0dd-4ce6e14c80ca" (UID: "861534ba-185f-47e0-a0dd-4ce6e14c80ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.432103 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92386321-ac04-4379-b4cb-7111d7328dad-utilities" (OuterVolumeSpecName: "utilities") pod "92386321-ac04-4379-b4cb-7111d7328dad" (UID: "92386321-ac04-4379-b4cb-7111d7328dad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.432297 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33636a92-6a39-4007-b537-94bdfa5c9191-utilities" (OuterVolumeSpecName: "utilities") pod "33636a92-6a39-4007-b537-94bdfa5c9191" (UID: "33636a92-6a39-4007-b537-94bdfa5c9191"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.433167 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "50f76fe0-cc37-4a22-bb1a-7df5d6012224" (UID: "50f76fe0-cc37-4a22-bb1a-7df5d6012224"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.433529 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2d46db-fa4e-4967-89d2-e6993f05bb90-utilities" (OuterVolumeSpecName: "utilities") pod "cb2d46db-fa4e-4967-89d2-e6993f05bb90" (UID: "cb2d46db-fa4e-4967-89d2-e6993f05bb90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.436108 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb2d46db-fa4e-4967-89d2-e6993f05bb90-kube-api-access-sgplg" (OuterVolumeSpecName: "kube-api-access-sgplg") pod "cb2d46db-fa4e-4967-89d2-e6993f05bb90" (UID: "cb2d46db-fa4e-4967-89d2-e6993f05bb90"). InnerVolumeSpecName "kube-api-access-sgplg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.436535 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "50f76fe0-cc37-4a22-bb1a-7df5d6012224" (UID: "50f76fe0-cc37-4a22-bb1a-7df5d6012224"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.439234 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/861534ba-185f-47e0-a0dd-4ce6e14c80ca-kube-api-access-8zsqk" (OuterVolumeSpecName: "kube-api-access-8zsqk") pod "861534ba-185f-47e0-a0dd-4ce6e14c80ca" (UID: "861534ba-185f-47e0-a0dd-4ce6e14c80ca"). InnerVolumeSpecName "kube-api-access-8zsqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.441343 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f76fe0-cc37-4a22-bb1a-7df5d6012224-kube-api-access-xlfp7" (OuterVolumeSpecName: "kube-api-access-xlfp7") pod "50f76fe0-cc37-4a22-bb1a-7df5d6012224" (UID: "50f76fe0-cc37-4a22-bb1a-7df5d6012224"). InnerVolumeSpecName "kube-api-access-xlfp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.446882 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92386321-ac04-4379-b4cb-7111d7328dad-kube-api-access-hbppk" (OuterVolumeSpecName: "kube-api-access-hbppk") pod "92386321-ac04-4379-b4cb-7111d7328dad" (UID: "92386321-ac04-4379-b4cb-7111d7328dad"). InnerVolumeSpecName "kube-api-access-hbppk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.452439 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/861534ba-185f-47e0-a0dd-4ce6e14c80ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "861534ba-185f-47e0-a0dd-4ce6e14c80ca" (UID: "861534ba-185f-47e0-a0dd-4ce6e14c80ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.479693 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb2d46db-fa4e-4967-89d2-e6993f05bb90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb2d46db-fa4e-4967-89d2-e6993f05bb90" (UID: "cb2d46db-fa4e-4967-89d2-e6993f05bb90"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.522429 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kh7rb"] Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.531683 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-722g9\" (UniqueName: \"kubernetes.io/projected/33636a92-6a39-4007-b537-94bdfa5c9191-kube-api-access-722g9\") pod \"33636a92-6a39-4007-b537-94bdfa5c9191\" (UID: \"33636a92-6a39-4007-b537-94bdfa5c9191\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.531717 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33636a92-6a39-4007-b537-94bdfa5c9191-catalog-content\") pod \"33636a92-6a39-4007-b537-94bdfa5c9191\" (UID: \"33636a92-6a39-4007-b537-94bdfa5c9191\") " Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.531929 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgplg\" (UniqueName: \"kubernetes.io/projected/cb2d46db-fa4e-4967-89d2-e6993f05bb90-kube-api-access-sgplg\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.531945 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861534ba-185f-47e0-a0dd-4ce6e14c80ca-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 
20:31:37.531956 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zsqk\" (UniqueName: \"kubernetes.io/projected/861534ba-185f-47e0-a0dd-4ce6e14c80ca-kube-api-access-8zsqk\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.531983 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlfp7\" (UniqueName: \"kubernetes.io/projected/50f76fe0-cc37-4a22-bb1a-7df5d6012224-kube-api-access-xlfp7\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.531992 4983 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.532001 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbppk\" (UniqueName: \"kubernetes.io/projected/92386321-ac04-4379-b4cb-7111d7328dad-kube-api-access-hbppk\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.532009 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33636a92-6a39-4007-b537-94bdfa5c9191-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.532016 4983 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/50f76fe0-cc37-4a22-bb1a-7df5d6012224-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.532026 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861534ba-185f-47e0-a0dd-4ce6e14c80ca-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.532034 4983 reconciler_common.go:293] 
"Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92386321-ac04-4379-b4cb-7111d7328dad-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.532041 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2d46db-fa4e-4967-89d2-e6993f05bb90-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.532324 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2d46db-fa4e-4967-89d2-e6993f05bb90-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.534367 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33636a92-6a39-4007-b537-94bdfa5c9191-kube-api-access-722g9" (OuterVolumeSpecName: "kube-api-access-722g9") pod "33636a92-6a39-4007-b537-94bdfa5c9191" (UID: "33636a92-6a39-4007-b537-94bdfa5c9191"). InnerVolumeSpecName "kube-api-access-722g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.550727 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92386321-ac04-4379-b4cb-7111d7328dad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92386321-ac04-4379-b4cb-7111d7328dad" (UID: "92386321-ac04-4379-b4cb-7111d7328dad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.604986 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33636a92-6a39-4007-b537-94bdfa5c9191-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33636a92-6a39-4007-b537-94bdfa5c9191" (UID: "33636a92-6a39-4007-b537-94bdfa5c9191"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.633371 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92386321-ac04-4379-b4cb-7111d7328dad-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.633406 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-722g9\" (UniqueName: \"kubernetes.io/projected/33636a92-6a39-4007-b537-94bdfa5c9191-kube-api-access-722g9\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:37 crc kubenswrapper[4983]: I1125 20:31:37.633418 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33636a92-6a39-4007-b537-94bdfa5c9191-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.072323 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" event={"ID":"50f76fe0-cc37-4a22-bb1a-7df5d6012224","Type":"ContainerDied","Data":"5aa163679bc5d0a943d64ad4ed76aefc958f3a0f00a546ac1176a07a32c4c771"} Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.072355 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cvg4v" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.072388 4983 scope.go:117] "RemoveContainer" containerID="1bcd13bb77a0531aaa1da9520a219c71c110ae029595c14abd244a075d189d7f" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.074955 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xzpql" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.075104 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xzpql" event={"ID":"861534ba-185f-47e0-a0dd-4ce6e14c80ca","Type":"ContainerDied","Data":"1b80c547e4752d0352a19f60a63d348f35d1dbe22507412b97d00cd5461e761a"} Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.079721 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v5mqd" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.079844 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v5mqd" event={"ID":"cb2d46db-fa4e-4967-89d2-e6993f05bb90","Type":"ContainerDied","Data":"30c63b5677bc5e22a9ee8aac4af23caef14d68ea8eec0818b7839d0b138a41f9"} Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.083436 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" event={"ID":"168ec053-d5d4-4ebc-956d-429c0d2ff5fb","Type":"ContainerStarted","Data":"987aea1dbe04e22b34b4bb1157697a782bd02700806e8297b5a3f44adb1bb908"} Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.083464 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" event={"ID":"168ec053-d5d4-4ebc-956d-429c0d2ff5fb","Type":"ContainerStarted","Data":"711c6f8dfcf392e2d1c0b95a0a4231cbcd8d617f941aa35d45006777a0d19a0a"} Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.083871 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.085988 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zs2f" 
event={"ID":"33636a92-6a39-4007-b537-94bdfa5c9191","Type":"ContainerDied","Data":"d1e9e5f0d44cd848ee2805bd316669e4b312fb0a4507ec40677ab3ca391c687d"} Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.086059 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4zs2f" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.089600 4983 scope.go:117] "RemoveContainer" containerID="cec0001da0d5a37daef1442b17ea19e390f6a4f6918e23f2c3574191af9384b6" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.090802 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.091198 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cvg4v"] Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.096227 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ss2qz" event={"ID":"92386321-ac04-4379-b4cb-7111d7328dad","Type":"ContainerDied","Data":"ba835cef360443954d0d986e5d5baced17cbe0e13e16b1879ac78c52f641bc55"} Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.096347 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ss2qz" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.096871 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cvg4v"] Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.105969 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzpql"] Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.108888 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xzpql"] Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.112725 4983 scope.go:117] "RemoveContainer" containerID="f540e1e2c243d8779203e05a1f2ca62e027f1b464814f6b5e5f5832a43a1e0e1" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.124915 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4zs2f"] Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.128808 4983 scope.go:117] "RemoveContainer" containerID="806222fd00baf55833ca14dd0bf4796bf276c267ed0e488d7388eecb6b4afb8c" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.128884 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4zs2f"] Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.139677 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kh7rb" podStartSLOduration=2.139661178 podStartE2EDuration="2.139661178s" podCreationTimestamp="2025-11-25 20:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:31:38.137664806 +0000 UTC m=+279.250198198" watchObservedRunningTime="2025-11-25 20:31:38.139661178 +0000 UTC m=+279.252194570" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.148037 4983 scope.go:117] "RemoveContainer" 
containerID="389aa7993e6293a335937f2614f3c339ff0c2b1b85d3502125163d79438b3972" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.161115 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v5mqd"] Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.164380 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v5mqd"] Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.173482 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ss2qz"] Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.178795 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ss2qz"] Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.180494 4983 scope.go:117] "RemoveContainer" containerID="2a85195ce885738278c721875c62b24ba11068114e939025489fca0a1962d4f6" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.194018 4983 scope.go:117] "RemoveContainer" containerID="48eaf7ff90089f40e6c50970c0424e475bd86b7b4b32aa979600ef0acd1c4998" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.210905 4983 scope.go:117] "RemoveContainer" containerID="528f06b9758e215a119434fd35d7f3a17e963606e2a95dff03f8d128ff9ad3e6" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.226357 4983 scope.go:117] "RemoveContainer" containerID="c3e47a762fda4e7e5df60068bd57c7f929f4f34d96727c08486ba5f6c3283636" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.240375 4983 scope.go:117] "RemoveContainer" containerID="74d2d5b391121cfdfba611b204473c80c08cfbdd65392bb5e6beec12b8783c4f" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.255690 4983 scope.go:117] "RemoveContainer" containerID="df22c72001a8368053abaf009bdf5e19d5548207184c5d6b756b76f76e341084" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.273786 4983 scope.go:117] "RemoveContainer" 
containerID="86bec4767168b2a8b1b7c1af7a602adc1564c330918315478acf76c949754742" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.286848 4983 scope.go:117] "RemoveContainer" containerID="280a597ef8546dae29266f3b11317dd657ec62470aa5cc2efcdcaedd35c23b6a" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845247 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x5wpf"] Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845656 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" containerName="registry-server" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845678 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" containerName="registry-server" Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845696 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33636a92-6a39-4007-b537-94bdfa5c9191" containerName="extract-content" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845704 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="33636a92-6a39-4007-b537-94bdfa5c9191" containerName="extract-content" Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845721 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33636a92-6a39-4007-b537-94bdfa5c9191" containerName="registry-server" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845730 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="33636a92-6a39-4007-b537-94bdfa5c9191" containerName="registry-server" Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845755 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861534ba-185f-47e0-a0dd-4ce6e14c80ca" containerName="extract-content" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845767 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="861534ba-185f-47e0-a0dd-4ce6e14c80ca" 
containerName="extract-content" Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845779 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" containerName="extract-utilities" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845789 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" containerName="extract-utilities" Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845798 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f76fe0-cc37-4a22-bb1a-7df5d6012224" containerName="marketplace-operator" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845807 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f76fe0-cc37-4a22-bb1a-7df5d6012224" containerName="marketplace-operator" Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845817 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92386321-ac04-4379-b4cb-7111d7328dad" containerName="extract-content" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845825 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="92386321-ac04-4379-b4cb-7111d7328dad" containerName="extract-content" Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845834 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861534ba-185f-47e0-a0dd-4ce6e14c80ca" containerName="registry-server" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845841 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="861534ba-185f-47e0-a0dd-4ce6e14c80ca" containerName="registry-server" Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845852 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92386321-ac04-4379-b4cb-7111d7328dad" containerName="extract-utilities" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845860 4983 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="92386321-ac04-4379-b4cb-7111d7328dad" containerName="extract-utilities" Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845870 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861534ba-185f-47e0-a0dd-4ce6e14c80ca" containerName="extract-utilities" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845878 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="861534ba-185f-47e0-a0dd-4ce6e14c80ca" containerName="extract-utilities" Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845886 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92386321-ac04-4379-b4cb-7111d7328dad" containerName="registry-server" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845893 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="92386321-ac04-4379-b4cb-7111d7328dad" containerName="registry-server" Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845902 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" containerName="extract-content" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845909 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" containerName="extract-content" Nov 25 20:31:38 crc kubenswrapper[4983]: E1125 20:31:38.845922 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33636a92-6a39-4007-b537-94bdfa5c9191" containerName="extract-utilities" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.845929 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="33636a92-6a39-4007-b537-94bdfa5c9191" containerName="extract-utilities" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.846075 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="33636a92-6a39-4007-b537-94bdfa5c9191" containerName="registry-server" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.846093 4983 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="861534ba-185f-47e0-a0dd-4ce6e14c80ca" containerName="registry-server" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.846110 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f76fe0-cc37-4a22-bb1a-7df5d6012224" containerName="marketplace-operator" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.846123 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="92386321-ac04-4379-b4cb-7111d7328dad" containerName="registry-server" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.846137 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" containerName="registry-server" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.847131 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.849581 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.857771 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5wpf"] Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.950913 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wsk5\" (UniqueName: \"kubernetes.io/projected/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-kube-api-access-5wsk5\") pod \"certified-operators-x5wpf\" (UID: \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\") " pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.950974 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-utilities\") pod \"certified-operators-x5wpf\" (UID: 
\"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\") " pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:38 crc kubenswrapper[4983]: I1125 20:31:38.951082 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-catalog-content\") pod \"certified-operators-x5wpf\" (UID: \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\") " pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.052421 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-catalog-content\") pod \"certified-operators-x5wpf\" (UID: \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\") " pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.052487 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wsk5\" (UniqueName: \"kubernetes.io/projected/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-kube-api-access-5wsk5\") pod \"certified-operators-x5wpf\" (UID: \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\") " pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.052512 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-utilities\") pod \"certified-operators-x5wpf\" (UID: \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\") " pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.053022 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-catalog-content\") pod \"certified-operators-x5wpf\" (UID: 
\"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\") " pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.053036 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-utilities\") pod \"certified-operators-x5wpf\" (UID: \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\") " pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.072811 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wsk5\" (UniqueName: \"kubernetes.io/projected/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-kube-api-access-5wsk5\") pod \"certified-operators-x5wpf\" (UID: \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\") " pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.164886 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.356450 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x5wpf"] Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.449549 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fvs44"] Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.450494 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.452806 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.460104 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b702438d-1a03-4bb1-9daf-3425f03a6f75-catalog-content\") pod \"redhat-marketplace-fvs44\" (UID: \"b702438d-1a03-4bb1-9daf-3425f03a6f75\") " pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.460146 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb5vp\" (UniqueName: \"kubernetes.io/projected/b702438d-1a03-4bb1-9daf-3425f03a6f75-kube-api-access-bb5vp\") pod \"redhat-marketplace-fvs44\" (UID: \"b702438d-1a03-4bb1-9daf-3425f03a6f75\") " pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.460177 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b702438d-1a03-4bb1-9daf-3425f03a6f75-utilities\") pod \"redhat-marketplace-fvs44\" (UID: \"b702438d-1a03-4bb1-9daf-3425f03a6f75\") " pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.461331 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvs44"] Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.561588 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b702438d-1a03-4bb1-9daf-3425f03a6f75-utilities\") pod \"redhat-marketplace-fvs44\" (UID: 
\"b702438d-1a03-4bb1-9daf-3425f03a6f75\") " pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.561688 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b702438d-1a03-4bb1-9daf-3425f03a6f75-catalog-content\") pod \"redhat-marketplace-fvs44\" (UID: \"b702438d-1a03-4bb1-9daf-3425f03a6f75\") " pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.561722 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb5vp\" (UniqueName: \"kubernetes.io/projected/b702438d-1a03-4bb1-9daf-3425f03a6f75-kube-api-access-bb5vp\") pod \"redhat-marketplace-fvs44\" (UID: \"b702438d-1a03-4bb1-9daf-3425f03a6f75\") " pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.562042 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b702438d-1a03-4bb1-9daf-3425f03a6f75-utilities\") pod \"redhat-marketplace-fvs44\" (UID: \"b702438d-1a03-4bb1-9daf-3425f03a6f75\") " pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.562121 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b702438d-1a03-4bb1-9daf-3425f03a6f75-catalog-content\") pod \"redhat-marketplace-fvs44\" (UID: \"b702438d-1a03-4bb1-9daf-3425f03a6f75\") " pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.578381 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb5vp\" (UniqueName: \"kubernetes.io/projected/b702438d-1a03-4bb1-9daf-3425f03a6f75-kube-api-access-bb5vp\") pod \"redhat-marketplace-fvs44\" (UID: 
\"b702438d-1a03-4bb1-9daf-3425f03a6f75\") " pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.612370 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33636a92-6a39-4007-b537-94bdfa5c9191" path="/var/lib/kubelet/pods/33636a92-6a39-4007-b537-94bdfa5c9191/volumes" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.613318 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50f76fe0-cc37-4a22-bb1a-7df5d6012224" path="/var/lib/kubelet/pods/50f76fe0-cc37-4a22-bb1a-7df5d6012224/volumes" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.613774 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="861534ba-185f-47e0-a0dd-4ce6e14c80ca" path="/var/lib/kubelet/pods/861534ba-185f-47e0-a0dd-4ce6e14c80ca/volumes" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.615321 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92386321-ac04-4379-b4cb-7111d7328dad" path="/var/lib/kubelet/pods/92386321-ac04-4379-b4cb-7111d7328dad/volumes" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.616047 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb2d46db-fa4e-4967-89d2-e6993f05bb90" path="/var/lib/kubelet/pods/cb2d46db-fa4e-4967-89d2-e6993f05bb90/volumes" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.769205 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:39 crc kubenswrapper[4983]: I1125 20:31:39.996018 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvs44"] Nov 25 20:31:40 crc kubenswrapper[4983]: W1125 20:31:40.005249 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb702438d_1a03_4bb1_9daf_3425f03a6f75.slice/crio-16f429a10ec08f1e413cba9064fa928415513f0dfbd0dcdcab5c0995a8de4a87 WatchSource:0}: Error finding container 16f429a10ec08f1e413cba9064fa928415513f0dfbd0dcdcab5c0995a8de4a87: Status 404 returned error can't find the container with id 16f429a10ec08f1e413cba9064fa928415513f0dfbd0dcdcab5c0995a8de4a87 Nov 25 20:31:40 crc kubenswrapper[4983]: I1125 20:31:40.114628 4983 generic.go:334] "Generic (PLEG): container finished" podID="fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" containerID="aa334b484199098ed473d005468e1f3cc78c91a2c59727b82134b43e065bbbbd" exitCode=0 Nov 25 20:31:40 crc kubenswrapper[4983]: I1125 20:31:40.115070 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpf" event={"ID":"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2","Type":"ContainerDied","Data":"aa334b484199098ed473d005468e1f3cc78c91a2c59727b82134b43e065bbbbd"} Nov 25 20:31:40 crc kubenswrapper[4983]: I1125 20:31:40.115524 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpf" event={"ID":"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2","Type":"ContainerStarted","Data":"82bab1276357d3905b64468feaa92a4b1a421a6c265c85c7b997fa0ca9fdb577"} Nov 25 20:31:40 crc kubenswrapper[4983]: I1125 20:31:40.116914 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvs44" 
event={"ID":"b702438d-1a03-4bb1-9daf-3425f03a6f75","Type":"ContainerStarted","Data":"16f429a10ec08f1e413cba9064fa928415513f0dfbd0dcdcab5c0995a8de4a87"} Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.122020 4983 generic.go:334] "Generic (PLEG): container finished" podID="b702438d-1a03-4bb1-9daf-3425f03a6f75" containerID="43866cc40f4f97c69528b704a40fd0f15a49536c790976cb89c21b11589cb401" exitCode=0 Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.122092 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvs44" event={"ID":"b702438d-1a03-4bb1-9daf-3425f03a6f75","Type":"ContainerDied","Data":"43866cc40f4f97c69528b704a40fd0f15a49536c790976cb89c21b11589cb401"} Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.126005 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpf" event={"ID":"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2","Type":"ContainerStarted","Data":"11c5b08c2145e30606b803006d6683ac773d40779612143c9c4b6995b4c4b3ae"} Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.241745 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hnbjx"] Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.242861 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.245286 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.259573 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hnbjx"] Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.409153 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38faea82-52be-43bf-8cea-8144ef0bd8d5-catalog-content\") pod \"redhat-operators-hnbjx\" (UID: \"38faea82-52be-43bf-8cea-8144ef0bd8d5\") " pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.409250 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38faea82-52be-43bf-8cea-8144ef0bd8d5-utilities\") pod \"redhat-operators-hnbjx\" (UID: \"38faea82-52be-43bf-8cea-8144ef0bd8d5\") " pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.409348 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9xw\" (UniqueName: \"kubernetes.io/projected/38faea82-52be-43bf-8cea-8144ef0bd8d5-kube-api-access-dc9xw\") pod \"redhat-operators-hnbjx\" (UID: \"38faea82-52be-43bf-8cea-8144ef0bd8d5\") " pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.510324 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38faea82-52be-43bf-8cea-8144ef0bd8d5-utilities\") pod \"redhat-operators-hnbjx\" (UID: \"38faea82-52be-43bf-8cea-8144ef0bd8d5\") " 
pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.510937 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc9xw\" (UniqueName: \"kubernetes.io/projected/38faea82-52be-43bf-8cea-8144ef0bd8d5-kube-api-access-dc9xw\") pod \"redhat-operators-hnbjx\" (UID: \"38faea82-52be-43bf-8cea-8144ef0bd8d5\") " pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.510970 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38faea82-52be-43bf-8cea-8144ef0bd8d5-utilities\") pod \"redhat-operators-hnbjx\" (UID: \"38faea82-52be-43bf-8cea-8144ef0bd8d5\") " pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.511271 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38faea82-52be-43bf-8cea-8144ef0bd8d5-catalog-content\") pod \"redhat-operators-hnbjx\" (UID: \"38faea82-52be-43bf-8cea-8144ef0bd8d5\") " pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.510970 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38faea82-52be-43bf-8cea-8144ef0bd8d5-catalog-content\") pod \"redhat-operators-hnbjx\" (UID: \"38faea82-52be-43bf-8cea-8144ef0bd8d5\") " pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.535383 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc9xw\" (UniqueName: \"kubernetes.io/projected/38faea82-52be-43bf-8cea-8144ef0bd8d5-kube-api-access-dc9xw\") pod \"redhat-operators-hnbjx\" (UID: \"38faea82-52be-43bf-8cea-8144ef0bd8d5\") " pod="openshift-marketplace/redhat-operators-hnbjx" Nov 
25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.616308 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.808120 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hnbjx"] Nov 25 20:31:41 crc kubenswrapper[4983]: W1125 20:31:41.810755 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38faea82_52be_43bf_8cea_8144ef0bd8d5.slice/crio-a049bd6713d2ce3d7f2f0dbb06670381676d3535a39235b7adc41e2b817a9e34 WatchSource:0}: Error finding container a049bd6713d2ce3d7f2f0dbb06670381676d3535a39235b7adc41e2b817a9e34: Status 404 returned error can't find the container with id a049bd6713d2ce3d7f2f0dbb06670381676d3535a39235b7adc41e2b817a9e34 Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.854866 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k2fp5"] Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.856192 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.859432 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k2fp5"] Nov 25 20:31:41 crc kubenswrapper[4983]: I1125 20:31:41.861273 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.027816 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97fc113-da49-4a64-a324-d63d1f29f028-catalog-content\") pod \"community-operators-k2fp5\" (UID: \"b97fc113-da49-4a64-a324-d63d1f29f028\") " pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.028317 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqwfx\" (UniqueName: \"kubernetes.io/projected/b97fc113-da49-4a64-a324-d63d1f29f028-kube-api-access-nqwfx\") pod \"community-operators-k2fp5\" (UID: \"b97fc113-da49-4a64-a324-d63d1f29f028\") " pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.028355 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97fc113-da49-4a64-a324-d63d1f29f028-utilities\") pod \"community-operators-k2fp5\" (UID: \"b97fc113-da49-4a64-a324-d63d1f29f028\") " pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:42 crc kubenswrapper[4983]: E1125 20:31:42.062442 4983 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38faea82_52be_43bf_8cea_8144ef0bd8d5.slice/crio-conmon-41df8240aeb84d53c725e8681cfd9995b567442e5c258dc68dd80f3229913853.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38faea82_52be_43bf_8cea_8144ef0bd8d5.slice/crio-41df8240aeb84d53c725e8681cfd9995b567442e5c258dc68dd80f3229913853.scope\": RecentStats: unable to find data in memory cache]" Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.129791 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97fc113-da49-4a64-a324-d63d1f29f028-catalog-content\") pod \"community-operators-k2fp5\" (UID: \"b97fc113-da49-4a64-a324-d63d1f29f028\") " pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.129898 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqwfx\" (UniqueName: \"kubernetes.io/projected/b97fc113-da49-4a64-a324-d63d1f29f028-kube-api-access-nqwfx\") pod \"community-operators-k2fp5\" (UID: \"b97fc113-da49-4a64-a324-d63d1f29f028\") " pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.129983 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97fc113-da49-4a64-a324-d63d1f29f028-utilities\") pod \"community-operators-k2fp5\" (UID: \"b97fc113-da49-4a64-a324-d63d1f29f028\") " pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.130266 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97fc113-da49-4a64-a324-d63d1f29f028-catalog-content\") pod \"community-operators-k2fp5\" (UID: 
\"b97fc113-da49-4a64-a324-d63d1f29f028\") " pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.130410 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97fc113-da49-4a64-a324-d63d1f29f028-utilities\") pod \"community-operators-k2fp5\" (UID: \"b97fc113-da49-4a64-a324-d63d1f29f028\") " pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.131842 4983 generic.go:334] "Generic (PLEG): container finished" podID="b702438d-1a03-4bb1-9daf-3425f03a6f75" containerID="75a3f793d32bfcbbd55790739401b24e84190e6b57122912b6350c860f6899d5" exitCode=0 Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.131986 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvs44" event={"ID":"b702438d-1a03-4bb1-9daf-3425f03a6f75","Type":"ContainerDied","Data":"75a3f793d32bfcbbd55790739401b24e84190e6b57122912b6350c860f6899d5"} Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.134821 4983 generic.go:334] "Generic (PLEG): container finished" podID="fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" containerID="11c5b08c2145e30606b803006d6683ac773d40779612143c9c4b6995b4c4b3ae" exitCode=0 Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.134899 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpf" event={"ID":"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2","Type":"ContainerDied","Data":"11c5b08c2145e30606b803006d6683ac773d40779612143c9c4b6995b4c4b3ae"} Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.141729 4983 generic.go:334] "Generic (PLEG): container finished" podID="38faea82-52be-43bf-8cea-8144ef0bd8d5" containerID="41df8240aeb84d53c725e8681cfd9995b567442e5c258dc68dd80f3229913853" exitCode=0 Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.141766 4983 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-hnbjx" event={"ID":"38faea82-52be-43bf-8cea-8144ef0bd8d5","Type":"ContainerDied","Data":"41df8240aeb84d53c725e8681cfd9995b567442e5c258dc68dd80f3229913853"} Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.141790 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnbjx" event={"ID":"38faea82-52be-43bf-8cea-8144ef0bd8d5","Type":"ContainerStarted","Data":"a049bd6713d2ce3d7f2f0dbb06670381676d3535a39235b7adc41e2b817a9e34"} Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.154494 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqwfx\" (UniqueName: \"kubernetes.io/projected/b97fc113-da49-4a64-a324-d63d1f29f028-kube-api-access-nqwfx\") pod \"community-operators-k2fp5\" (UID: \"b97fc113-da49-4a64-a324-d63d1f29f028\") " pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.193864 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:42 crc kubenswrapper[4983]: I1125 20:31:42.382582 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k2fp5"] Nov 25 20:31:42 crc kubenswrapper[4983]: W1125 20:31:42.392106 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97fc113_da49_4a64_a324_d63d1f29f028.slice/crio-489a7f7134ecce24998f743000aca24a9df5834ecb530e74a5201a96d8def42e WatchSource:0}: Error finding container 489a7f7134ecce24998f743000aca24a9df5834ecb530e74a5201a96d8def42e: Status 404 returned error can't find the container with id 489a7f7134ecce24998f743000aca24a9df5834ecb530e74a5201a96d8def42e Nov 25 20:31:43 crc kubenswrapper[4983]: I1125 20:31:43.149120 4983 generic.go:334] "Generic (PLEG): container finished" podID="b97fc113-da49-4a64-a324-d63d1f29f028" containerID="8b17103bb72d0b30106327a94d99f818be826d499628a6328ec3228c6cf2382a" exitCode=0 Nov 25 20:31:43 crc kubenswrapper[4983]: I1125 20:31:43.149294 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2fp5" event={"ID":"b97fc113-da49-4a64-a324-d63d1f29f028","Type":"ContainerDied","Data":"8b17103bb72d0b30106327a94d99f818be826d499628a6328ec3228c6cf2382a"} Nov 25 20:31:43 crc kubenswrapper[4983]: I1125 20:31:43.149687 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2fp5" event={"ID":"b97fc113-da49-4a64-a324-d63d1f29f028","Type":"ContainerStarted","Data":"489a7f7134ecce24998f743000aca24a9df5834ecb530e74a5201a96d8def42e"} Nov 25 20:31:43 crc kubenswrapper[4983]: I1125 20:31:43.154230 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvs44" 
event={"ID":"b702438d-1a03-4bb1-9daf-3425f03a6f75","Type":"ContainerStarted","Data":"230a1886de506ee4bdb2e22ac6bff9ad15f966ae59413ef158d37e35f14c1e9a"} Nov 25 20:31:43 crc kubenswrapper[4983]: I1125 20:31:43.157183 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpf" event={"ID":"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2","Type":"ContainerStarted","Data":"206f56354a19d84ebcf30f171dc4bd6b47855f01cfe07c36a1943c5ca9d5539d"} Nov 25 20:31:43 crc kubenswrapper[4983]: I1125 20:31:43.191491 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fvs44" podStartSLOduration=2.724285983 podStartE2EDuration="4.191474243s" podCreationTimestamp="2025-11-25 20:31:39 +0000 UTC" firstStartedPulling="2025-11-25 20:31:41.123708489 +0000 UTC m=+282.236241881" lastFinishedPulling="2025-11-25 20:31:42.590896749 +0000 UTC m=+283.703430141" observedRunningTime="2025-11-25 20:31:43.189367089 +0000 UTC m=+284.301900491" watchObservedRunningTime="2025-11-25 20:31:43.191474243 +0000 UTC m=+284.304007635" Nov 25 20:31:43 crc kubenswrapper[4983]: I1125 20:31:43.215189 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x5wpf" podStartSLOduration=2.782689319 podStartE2EDuration="5.215164759s" podCreationTimestamp="2025-11-25 20:31:38 +0000 UTC" firstStartedPulling="2025-11-25 20:31:40.115869453 +0000 UTC m=+281.228402845" lastFinishedPulling="2025-11-25 20:31:42.548344883 +0000 UTC m=+283.660878285" observedRunningTime="2025-11-25 20:31:43.211687018 +0000 UTC m=+284.324220420" watchObservedRunningTime="2025-11-25 20:31:43.215164759 +0000 UTC m=+284.327698141" Nov 25 20:31:44 crc kubenswrapper[4983]: I1125 20:31:44.164906 4983 generic.go:334] "Generic (PLEG): container finished" podID="b97fc113-da49-4a64-a324-d63d1f29f028" containerID="65809000feeb3424919ec4dabbf6e5981b2aa652a7b3508c114a66876ac023d0" exitCode=0 Nov 25 
20:31:44 crc kubenswrapper[4983]: I1125 20:31:44.165060 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2fp5" event={"ID":"b97fc113-da49-4a64-a324-d63d1f29f028","Type":"ContainerDied","Data":"65809000feeb3424919ec4dabbf6e5981b2aa652a7b3508c114a66876ac023d0"} Nov 25 20:31:44 crc kubenswrapper[4983]: I1125 20:31:44.168972 4983 generic.go:334] "Generic (PLEG): container finished" podID="38faea82-52be-43bf-8cea-8144ef0bd8d5" containerID="7dc6ade5f7f0ce6b0fe6a17fb8ea8c059766dab7e46e72db94c21b54c498ec36" exitCode=0 Nov 25 20:31:44 crc kubenswrapper[4983]: I1125 20:31:44.170311 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnbjx" event={"ID":"38faea82-52be-43bf-8cea-8144ef0bd8d5","Type":"ContainerDied","Data":"7dc6ade5f7f0ce6b0fe6a17fb8ea8c059766dab7e46e72db94c21b54c498ec36"} Nov 25 20:31:45 crc kubenswrapper[4983]: I1125 20:31:45.176426 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k2fp5" event={"ID":"b97fc113-da49-4a64-a324-d63d1f29f028","Type":"ContainerStarted","Data":"1fed4125d9f2e4b922645abd78d588dde9c31fa62c229e67f1ac732189453427"} Nov 25 20:31:45 crc kubenswrapper[4983]: I1125 20:31:45.178766 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnbjx" event={"ID":"38faea82-52be-43bf-8cea-8144ef0bd8d5","Type":"ContainerStarted","Data":"621f441bd726479cfd5f49548c432a820d381c0a7d904cadd5158d632b7eda5f"} Nov 25 20:31:45 crc kubenswrapper[4983]: I1125 20:31:45.196131 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k2fp5" podStartSLOduration=2.778032623 podStartE2EDuration="4.196117048s" podCreationTimestamp="2025-11-25 20:31:41 +0000 UTC" firstStartedPulling="2025-11-25 20:31:43.151921396 +0000 UTC m=+284.264454788" lastFinishedPulling="2025-11-25 20:31:44.570005811 +0000 UTC 
m=+285.682539213" observedRunningTime="2025-11-25 20:31:45.193390218 +0000 UTC m=+286.305923600" watchObservedRunningTime="2025-11-25 20:31:45.196117048 +0000 UTC m=+286.308650440" Nov 25 20:31:45 crc kubenswrapper[4983]: I1125 20:31:45.211266 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hnbjx" podStartSLOduration=1.587563824 podStartE2EDuration="4.211249932s" podCreationTimestamp="2025-11-25 20:31:41 +0000 UTC" firstStartedPulling="2025-11-25 20:31:42.143095185 +0000 UTC m=+283.255628577" lastFinishedPulling="2025-11-25 20:31:44.766781293 +0000 UTC m=+285.879314685" observedRunningTime="2025-11-25 20:31:45.20888311 +0000 UTC m=+286.321416512" watchObservedRunningTime="2025-11-25 20:31:45.211249932 +0000 UTC m=+286.323783324" Nov 25 20:31:49 crc kubenswrapper[4983]: I1125 20:31:49.166044 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:49 crc kubenswrapper[4983]: I1125 20:31:49.167284 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:49 crc kubenswrapper[4983]: I1125 20:31:49.214370 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:49 crc kubenswrapper[4983]: I1125 20:31:49.261115 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:31:49 crc kubenswrapper[4983]: I1125 20:31:49.769455 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:49 crc kubenswrapper[4983]: I1125 20:31:49.769985 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:49 crc kubenswrapper[4983]: I1125 
20:31:49.815284 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:50 crc kubenswrapper[4983]: I1125 20:31:50.250673 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fvs44" Nov 25 20:31:51 crc kubenswrapper[4983]: I1125 20:31:51.617134 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:51 crc kubenswrapper[4983]: I1125 20:31:51.617454 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:51 crc kubenswrapper[4983]: I1125 20:31:51.651169 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:52 crc kubenswrapper[4983]: I1125 20:31:52.194384 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:52 crc kubenswrapper[4983]: I1125 20:31:52.196262 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:52 crc kubenswrapper[4983]: I1125 20:31:52.235324 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:52 crc kubenswrapper[4983]: I1125 20:31:52.275889 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hnbjx" Nov 25 20:31:52 crc kubenswrapper[4983]: I1125 20:31:52.281126 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k2fp5" Nov 25 20:31:59 crc kubenswrapper[4983]: I1125 20:31:59.372129 4983 cert_rotation.go:91] certificate rotation detected, shutting down client connections to 
start using new credentials Nov 25 20:33:09 crc kubenswrapper[4983]: I1125 20:33:09.928186 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:33:09 crc kubenswrapper[4983]: I1125 20:33:09.929161 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:33:39 crc kubenswrapper[4983]: I1125 20:33:39.928168 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:33:39 crc kubenswrapper[4983]: I1125 20:33:39.928647 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:34:09 crc kubenswrapper[4983]: I1125 20:34:09.928134 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:34:09 crc kubenswrapper[4983]: I1125 20:34:09.928910 4983 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:34:09 crc kubenswrapper[4983]: I1125 20:34:09.928987 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:34:09 crc kubenswrapper[4983]: I1125 20:34:09.930046 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cedbe1d0d40fe4f150b02eabc08807db470ed60486ee4e83fdd5c11bc49792fa"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:34:09 crc kubenswrapper[4983]: I1125 20:34:09.930169 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://cedbe1d0d40fe4f150b02eabc08807db470ed60486ee4e83fdd5c11bc49792fa" gracePeriod=600 Nov 25 20:34:10 crc kubenswrapper[4983]: I1125 20:34:10.237378 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="cedbe1d0d40fe4f150b02eabc08807db470ed60486ee4e83fdd5c11bc49792fa" exitCode=0 Nov 25 20:34:10 crc kubenswrapper[4983]: I1125 20:34:10.237517 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"cedbe1d0d40fe4f150b02eabc08807db470ed60486ee4e83fdd5c11bc49792fa"} Nov 25 20:34:10 crc kubenswrapper[4983]: I1125 20:34:10.238006 4983 scope.go:117] "RemoveContainer" 
containerID="fc360c08594d54c6a98916500cef38547e7347f1ddbdcda0a7fd6ec8a866be4c" Nov 25 20:34:11 crc kubenswrapper[4983]: I1125 20:34:11.247961 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"7306555a4508b1828e5cf4831dc81aad7a61440dcfa7cbd1e1c973af6958d2b0"} Nov 25 20:34:19 crc kubenswrapper[4983]: I1125 20:34:19.930499 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k54zs"] Nov 25 20:34:19 crc kubenswrapper[4983]: I1125 20:34:19.931874 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:19 crc kubenswrapper[4983]: I1125 20:34:19.940310 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k54zs"] Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.006952 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.007019 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/acc8760c-0ff6-422c-bfac-ce4bda16c318-registry-certificates\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.007050 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/acc8760c-0ff6-422c-bfac-ce4bda16c318-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.007082 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h2wl\" (UniqueName: \"kubernetes.io/projected/acc8760c-0ff6-422c-bfac-ce4bda16c318-kube-api-access-5h2wl\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.007114 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/acc8760c-0ff6-422c-bfac-ce4bda16c318-bound-sa-token\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.007283 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/acc8760c-0ff6-422c-bfac-ce4bda16c318-registry-tls\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.007326 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/acc8760c-0ff6-422c-bfac-ce4bda16c318-trusted-ca\") pod \"image-registry-66df7c8f76-k54zs\" (UID: 
\"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.007354 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/acc8760c-0ff6-422c-bfac-ce4bda16c318-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.035880 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.108880 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/acc8760c-0ff6-422c-bfac-ce4bda16c318-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.108935 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h2wl\" (UniqueName: \"kubernetes.io/projected/acc8760c-0ff6-422c-bfac-ce4bda16c318-kube-api-access-5h2wl\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.108963 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/acc8760c-0ff6-422c-bfac-ce4bda16c318-bound-sa-token\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.108992 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/acc8760c-0ff6-422c-bfac-ce4bda16c318-registry-tls\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.109028 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/acc8760c-0ff6-422c-bfac-ce4bda16c318-trusted-ca\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.109048 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/acc8760c-0ff6-422c-bfac-ce4bda16c318-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.109087 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/acc8760c-0ff6-422c-bfac-ce4bda16c318-registry-certificates\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.109970 4983 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/acc8760c-0ff6-422c-bfac-ce4bda16c318-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.110486 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/acc8760c-0ff6-422c-bfac-ce4bda16c318-trusted-ca\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.110740 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/acc8760c-0ff6-422c-bfac-ce4bda16c318-registry-certificates\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.115049 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/acc8760c-0ff6-422c-bfac-ce4bda16c318-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.115092 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/acc8760c-0ff6-422c-bfac-ce4bda16c318-registry-tls\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc 
kubenswrapper[4983]: I1125 20:34:20.125715 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h2wl\" (UniqueName: \"kubernetes.io/projected/acc8760c-0ff6-422c-bfac-ce4bda16c318-kube-api-access-5h2wl\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.126410 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/acc8760c-0ff6-422c-bfac-ce4bda16c318-bound-sa-token\") pod \"image-registry-66df7c8f76-k54zs\" (UID: \"acc8760c-0ff6-422c-bfac-ce4bda16c318\") " pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.249216 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:20 crc kubenswrapper[4983]: I1125 20:34:20.725444 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k54zs"] Nov 25 20:34:21 crc kubenswrapper[4983]: I1125 20:34:21.341153 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" event={"ID":"acc8760c-0ff6-422c-bfac-ce4bda16c318","Type":"ContainerStarted","Data":"e34e44949220b9f3f81f5a9990de52c8eac785f5d8942eee9642719616c46a6b"} Nov 25 20:34:21 crc kubenswrapper[4983]: I1125 20:34:21.341993 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:21 crc kubenswrapper[4983]: I1125 20:34:21.342030 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" 
event={"ID":"acc8760c-0ff6-422c-bfac-ce4bda16c318","Type":"ContainerStarted","Data":"48985d25562c40e7b069baf21258d4ee7cebf35fd0f80a3e9a1a491c40d254ad"} Nov 25 20:34:21 crc kubenswrapper[4983]: I1125 20:34:21.366783 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" podStartSLOduration=2.366762198 podStartE2EDuration="2.366762198s" podCreationTimestamp="2025-11-25 20:34:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:34:21.365168876 +0000 UTC m=+442.477702268" watchObservedRunningTime="2025-11-25 20:34:21.366762198 +0000 UTC m=+442.479295590" Nov 25 20:34:40 crc kubenswrapper[4983]: I1125 20:34:40.255942 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-k54zs" Nov 25 20:34:40 crc kubenswrapper[4983]: I1125 20:34:40.327685 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gznhv"] Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.384768 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" podUID="cb434a7b-12ca-4505-b66c-5d5bf4178d12" containerName="registry" containerID="cri-o://5eb054a66a1b4fcdd5b233d61cc8cdbe6eb54b449ff848fe77f43f6c0f7cf82d" gracePeriod=30 Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.630301 4983 generic.go:334] "Generic (PLEG): container finished" podID="cb434a7b-12ca-4505-b66c-5d5bf4178d12" containerID="5eb054a66a1b4fcdd5b233d61cc8cdbe6eb54b449ff848fe77f43f6c0f7cf82d" exitCode=0 Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.630342 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" 
event={"ID":"cb434a7b-12ca-4505-b66c-5d5bf4178d12","Type":"ContainerDied","Data":"5eb054a66a1b4fcdd5b233d61cc8cdbe6eb54b449ff848fe77f43f6c0f7cf82d"} Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.702851 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.809400 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-bound-sa-token\") pod \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.809508 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cb434a7b-12ca-4505-b66c-5d5bf4178d12-registry-certificates\") pod \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.809626 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cb434a7b-12ca-4505-b66c-5d5bf4178d12-installation-pull-secrets\") pod \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.809656 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cb434a7b-12ca-4505-b66c-5d5bf4178d12-ca-trust-extracted\") pod \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.809685 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-registry-tls\") pod \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.809708 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqmd6\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-kube-api-access-rqmd6\") pod \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.809732 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb434a7b-12ca-4505-b66c-5d5bf4178d12-trusted-ca\") pod \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.809882 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\" (UID: \"cb434a7b-12ca-4505-b66c-5d5bf4178d12\") " Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.811017 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb434a7b-12ca-4505-b66c-5d5bf4178d12-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cb434a7b-12ca-4505-b66c-5d5bf4178d12" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.811120 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb434a7b-12ca-4505-b66c-5d5bf4178d12-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "cb434a7b-12ca-4505-b66c-5d5bf4178d12" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.815236 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb434a7b-12ca-4505-b66c-5d5bf4178d12-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "cb434a7b-12ca-4505-b66c-5d5bf4178d12" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.815320 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "cb434a7b-12ca-4505-b66c-5d5bf4178d12" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.815786 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "cb434a7b-12ca-4505-b66c-5d5bf4178d12" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.820283 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "cb434a7b-12ca-4505-b66c-5d5bf4178d12" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.824688 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb434a7b-12ca-4505-b66c-5d5bf4178d12-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "cb434a7b-12ca-4505-b66c-5d5bf4178d12" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.826936 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-kube-api-access-rqmd6" (OuterVolumeSpecName: "kube-api-access-rqmd6") pod "cb434a7b-12ca-4505-b66c-5d5bf4178d12" (UID: "cb434a7b-12ca-4505-b66c-5d5bf4178d12"). InnerVolumeSpecName "kube-api-access-rqmd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.911226 4983 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cb434a7b-12ca-4505-b66c-5d5bf4178d12-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.911264 4983 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cb434a7b-12ca-4505-b66c-5d5bf4178d12-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.911275 4983 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cb434a7b-12ca-4505-b66c-5d5bf4178d12-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.911286 4983 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.911298 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqmd6\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-kube-api-access-rqmd6\") on node \"crc\" DevicePath \"\"" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.911308 4983 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb434a7b-12ca-4505-b66c-5d5bf4178d12-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:35:05 crc kubenswrapper[4983]: I1125 20:35:05.911315 4983 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb434a7b-12ca-4505-b66c-5d5bf4178d12-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 20:35:06 crc 
kubenswrapper[4983]: I1125 20:35:06.636148 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" event={"ID":"cb434a7b-12ca-4505-b66c-5d5bf4178d12","Type":"ContainerDied","Data":"1f3be52ae6268a860d138004a4046196a4fb8ccc666c6a2441a6514ea6df9cd5"} Nov 25 20:35:06 crc kubenswrapper[4983]: I1125 20:35:06.636202 4983 scope.go:117] "RemoveContainer" containerID="5eb054a66a1b4fcdd5b233d61cc8cdbe6eb54b449ff848fe77f43f6c0f7cf82d" Nov 25 20:35:06 crc kubenswrapper[4983]: I1125 20:35:06.636204 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gznhv" Nov 25 20:35:06 crc kubenswrapper[4983]: I1125 20:35:06.663869 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gznhv"] Nov 25 20:35:06 crc kubenswrapper[4983]: I1125 20:35:06.667740 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gznhv"] Nov 25 20:35:07 crc kubenswrapper[4983]: I1125 20:35:07.618725 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb434a7b-12ca-4505-b66c-5d5bf4178d12" path="/var/lib/kubelet/pods/cb434a7b-12ca-4505-b66c-5d5bf4178d12/volumes" Nov 25 20:36:39 crc kubenswrapper[4983]: I1125 20:36:39.928117 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:36:39 crc kubenswrapper[4983]: I1125 20:36:39.928984 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.526547 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vgj58"] Nov 25 20:36:44 crc kubenswrapper[4983]: E1125 20:36:44.527246 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb434a7b-12ca-4505-b66c-5d5bf4178d12" containerName="registry" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.527262 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb434a7b-12ca-4505-b66c-5d5bf4178d12" containerName="registry" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.527398 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb434a7b-12ca-4505-b66c-5d5bf4178d12" containerName="registry" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.527926 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-vgj58" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.532208 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.532282 4983 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-njkjp" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.532660 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.536059 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vgj58"] Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.541716 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-wgvpx"] Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.542473 4983 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-wgvpx" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.547129 4983 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lvvxz" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.548215 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-5xb5k"] Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.549089 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-5xb5k" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.552746 4983 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-hkqcz" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.557711 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-wgvpx"] Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.564667 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-5xb5k"] Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.664646 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdhsb\" (UniqueName: \"kubernetes.io/projected/fea16ade-51b4-491b-acb2-4a3d5974bf0c-kube-api-access-wdhsb\") pod \"cert-manager-webhook-5655c58dd6-5xb5k\" (UID: \"fea16ade-51b4-491b-acb2-4a3d5974bf0c\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-5xb5k" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.664726 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd4sf\" (UniqueName: \"kubernetes.io/projected/3603f9e9-5a0e-4283-86a8-4fa4a2b1d98a-kube-api-access-gd4sf\") pod \"cert-manager-5b446d88c5-wgvpx\" (UID: \"3603f9e9-5a0e-4283-86a8-4fa4a2b1d98a\") " 
pod="cert-manager/cert-manager-5b446d88c5-wgvpx" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.664793 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvp2g\" (UniqueName: \"kubernetes.io/projected/a84e28f5-6c16-49c9-aaee-2e1ba4b547a3-kube-api-access-rvp2g\") pod \"cert-manager-cainjector-7f985d654d-vgj58\" (UID: \"a84e28f5-6c16-49c9-aaee-2e1ba4b547a3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vgj58" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.765799 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd4sf\" (UniqueName: \"kubernetes.io/projected/3603f9e9-5a0e-4283-86a8-4fa4a2b1d98a-kube-api-access-gd4sf\") pod \"cert-manager-5b446d88c5-wgvpx\" (UID: \"3603f9e9-5a0e-4283-86a8-4fa4a2b1d98a\") " pod="cert-manager/cert-manager-5b446d88c5-wgvpx" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.765878 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvp2g\" (UniqueName: \"kubernetes.io/projected/a84e28f5-6c16-49c9-aaee-2e1ba4b547a3-kube-api-access-rvp2g\") pod \"cert-manager-cainjector-7f985d654d-vgj58\" (UID: \"a84e28f5-6c16-49c9-aaee-2e1ba4b547a3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vgj58" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.765935 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdhsb\" (UniqueName: \"kubernetes.io/projected/fea16ade-51b4-491b-acb2-4a3d5974bf0c-kube-api-access-wdhsb\") pod \"cert-manager-webhook-5655c58dd6-5xb5k\" (UID: \"fea16ade-51b4-491b-acb2-4a3d5974bf0c\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-5xb5k" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.785257 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd4sf\" (UniqueName: 
\"kubernetes.io/projected/3603f9e9-5a0e-4283-86a8-4fa4a2b1d98a-kube-api-access-gd4sf\") pod \"cert-manager-5b446d88c5-wgvpx\" (UID: \"3603f9e9-5a0e-4283-86a8-4fa4a2b1d98a\") " pod="cert-manager/cert-manager-5b446d88c5-wgvpx" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.785521 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvp2g\" (UniqueName: \"kubernetes.io/projected/a84e28f5-6c16-49c9-aaee-2e1ba4b547a3-kube-api-access-rvp2g\") pod \"cert-manager-cainjector-7f985d654d-vgj58\" (UID: \"a84e28f5-6c16-49c9-aaee-2e1ba4b547a3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vgj58" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.791116 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdhsb\" (UniqueName: \"kubernetes.io/projected/fea16ade-51b4-491b-acb2-4a3d5974bf0c-kube-api-access-wdhsb\") pod \"cert-manager-webhook-5655c58dd6-5xb5k\" (UID: \"fea16ade-51b4-491b-acb2-4a3d5974bf0c\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-5xb5k" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.858392 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-vgj58" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.871615 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-wgvpx" Nov 25 20:36:44 crc kubenswrapper[4983]: I1125 20:36:44.881929 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-5xb5k" Nov 25 20:36:45 crc kubenswrapper[4983]: I1125 20:36:45.102265 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vgj58"] Nov 25 20:36:45 crc kubenswrapper[4983]: I1125 20:36:45.114841 4983 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:36:45 crc kubenswrapper[4983]: I1125 20:36:45.259278 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-vgj58" event={"ID":"a84e28f5-6c16-49c9-aaee-2e1ba4b547a3","Type":"ContainerStarted","Data":"0e88575ede6b0d1861ddc1ff00da2d5cdc964b13fd86b12d91505ff8ad258364"} Nov 25 20:36:45 crc kubenswrapper[4983]: I1125 20:36:45.368167 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-wgvpx"] Nov 25 20:36:45 crc kubenswrapper[4983]: W1125 20:36:45.371751 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3603f9e9_5a0e_4283_86a8_4fa4a2b1d98a.slice/crio-f6ebaea7d4607ee75b83f1492a3c0706e7ce97d9209580187323f819d3e15c16 WatchSource:0}: Error finding container f6ebaea7d4607ee75b83f1492a3c0706e7ce97d9209580187323f819d3e15c16: Status 404 returned error can't find the container with id f6ebaea7d4607ee75b83f1492a3c0706e7ce97d9209580187323f819d3e15c16 Nov 25 20:36:45 crc kubenswrapper[4983]: I1125 20:36:45.387837 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-5xb5k"] Nov 25 20:36:45 crc kubenswrapper[4983]: W1125 20:36:45.393935 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfea16ade_51b4_491b_acb2_4a3d5974bf0c.slice/crio-1a667aa997d57510ab7c287b2e99bddc6fbb37827d90b618b152721378d1b72b WatchSource:0}: Error finding container 
1a667aa997d57510ab7c287b2e99bddc6fbb37827d90b618b152721378d1b72b: Status 404 returned error can't find the container with id 1a667aa997d57510ab7c287b2e99bddc6fbb37827d90b618b152721378d1b72b Nov 25 20:36:46 crc kubenswrapper[4983]: I1125 20:36:46.267546 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-5xb5k" event={"ID":"fea16ade-51b4-491b-acb2-4a3d5974bf0c","Type":"ContainerStarted","Data":"1a667aa997d57510ab7c287b2e99bddc6fbb37827d90b618b152721378d1b72b"} Nov 25 20:36:46 crc kubenswrapper[4983]: I1125 20:36:46.271289 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-wgvpx" event={"ID":"3603f9e9-5a0e-4283-86a8-4fa4a2b1d98a","Type":"ContainerStarted","Data":"f6ebaea7d4607ee75b83f1492a3c0706e7ce97d9209580187323f819d3e15c16"} Nov 25 20:36:47 crc kubenswrapper[4983]: I1125 20:36:47.280747 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-vgj58" event={"ID":"a84e28f5-6c16-49c9-aaee-2e1ba4b547a3","Type":"ContainerStarted","Data":"861d8e9455caeb3e7c83e20fe3d30dbb24313ea92334173314aac4c46499aeaf"} Nov 25 20:36:47 crc kubenswrapper[4983]: I1125 20:36:47.299209 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-vgj58" podStartSLOduration=1.35617402 podStartE2EDuration="3.299175709s" podCreationTimestamp="2025-11-25 20:36:44 +0000 UTC" firstStartedPulling="2025-11-25 20:36:45.114394755 +0000 UTC m=+586.226928147" lastFinishedPulling="2025-11-25 20:36:47.057396444 +0000 UTC m=+588.169929836" observedRunningTime="2025-11-25 20:36:47.293741344 +0000 UTC m=+588.406274746" watchObservedRunningTime="2025-11-25 20:36:47.299175709 +0000 UTC m=+588.411709101" Nov 25 20:36:48 crc kubenswrapper[4983]: I1125 20:36:48.288360 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-5xb5k" 
event={"ID":"fea16ade-51b4-491b-acb2-4a3d5974bf0c","Type":"ContainerStarted","Data":"41697e65615dc67b9b604eb222c8d13cf6e369ad05717af7287474d3a922e095"} Nov 25 20:36:48 crc kubenswrapper[4983]: I1125 20:36:48.288423 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-5xb5k" Nov 25 20:36:48 crc kubenswrapper[4983]: I1125 20:36:48.310075 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-5xb5k" podStartSLOduration=1.9022109170000001 podStartE2EDuration="4.310055928s" podCreationTimestamp="2025-11-25 20:36:44 +0000 UTC" firstStartedPulling="2025-11-25 20:36:45.396318945 +0000 UTC m=+586.508852337" lastFinishedPulling="2025-11-25 20:36:47.804163956 +0000 UTC m=+588.916697348" observedRunningTime="2025-11-25 20:36:48.306765531 +0000 UTC m=+589.419298923" watchObservedRunningTime="2025-11-25 20:36:48.310055928 +0000 UTC m=+589.422589320" Nov 25 20:36:49 crc kubenswrapper[4983]: I1125 20:36:49.304807 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-wgvpx" event={"ID":"3603f9e9-5a0e-4283-86a8-4fa4a2b1d98a","Type":"ContainerStarted","Data":"c995e1a7005294ba36f1f18485871eaf75b3878d741eedb6cf2436e016055600"} Nov 25 20:36:54 crc kubenswrapper[4983]: I1125 20:36:54.884761 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-5xb5k" Nov 25 20:36:54 crc kubenswrapper[4983]: I1125 20:36:54.909752 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-wgvpx" podStartSLOduration=7.724537956 podStartE2EDuration="10.909723441s" podCreationTimestamp="2025-11-25 20:36:44 +0000 UTC" firstStartedPulling="2025-11-25 20:36:45.374423574 +0000 UTC m=+586.486956966" lastFinishedPulling="2025-11-25 20:36:48.559609049 +0000 UTC m=+589.672142451" observedRunningTime="2025-11-25 
20:36:49.336583413 +0000 UTC m=+590.449116835" watchObservedRunningTime="2025-11-25 20:36:54.909723441 +0000 UTC m=+596.022256853" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.241811 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4t2p5"] Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.242932 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovn-controller" containerID="cri-o://3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f" gracePeriod=30 Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.243600 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="sbdb" containerID="cri-o://88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5" gracePeriod=30 Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.244165 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567" gracePeriod=30 Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.244209 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovn-acl-logging" containerID="cri-o://4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551" gracePeriod=30 Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.244532 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" 
podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="kube-rbac-proxy-node" containerID="cri-o://1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba" gracePeriod=30 Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.244495 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="nbdb" containerID="cri-o://7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154" gracePeriod=30 Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.244520 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="northd" containerID="cri-o://7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e" gracePeriod=30 Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.284478 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" containerID="cri-o://f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771" gracePeriod=30 Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.337541 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6fkbz_40e594b9-8aa2-400d-b72e-c36e4523ced3/kube-multus/2.log" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.338321 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6fkbz_40e594b9-8aa2-400d-b72e-c36e4523ced3/kube-multus/1.log" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.338378 4983 generic.go:334] "Generic (PLEG): container finished" podID="40e594b9-8aa2-400d-b72e-c36e4523ced3" containerID="e343e37d5bca4b2b04199dde3cd4ec70dfcf0769bf38fefdbeb42bcbc1e18a4f" exitCode=2 Nov 25 20:36:55 crc kubenswrapper[4983]: 
I1125 20:36:55.338419 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6fkbz" event={"ID":"40e594b9-8aa2-400d-b72e-c36e4523ced3","Type":"ContainerDied","Data":"e343e37d5bca4b2b04199dde3cd4ec70dfcf0769bf38fefdbeb42bcbc1e18a4f"} Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.338461 4983 scope.go:117] "RemoveContainer" containerID="eb0e5d91873a8170028223fff5efc95aed446bf7add2da7f25fbb9be747f0118" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.339074 4983 scope.go:117] "RemoveContainer" containerID="e343e37d5bca4b2b04199dde3cd4ec70dfcf0769bf38fefdbeb42bcbc1e18a4f" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.339285 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-6fkbz_openshift-multus(40e594b9-8aa2-400d-b72e-c36e4523ced3)\"" pod="openshift-multus/multus-6fkbz" podUID="40e594b9-8aa2-400d-b72e-c36e4523ced3" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.551232 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/3.log" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.553520 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovn-acl-logging/0.log" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.554026 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovn-controller/0.log" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.554520 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617332 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pz785"] Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617535 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovn-acl-logging" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617565 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovn-acl-logging" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617577 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="sbdb" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617585 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="sbdb" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617598 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617608 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617622 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617631 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617642 4983 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617650 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617659 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617667 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617674 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="northd" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617680 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="northd" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617690 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617696 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617704 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="kubecfg-setup" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617710 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="kubecfg-setup" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617718 4983 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="nbdb" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617723 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="nbdb" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617731 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovn-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617737 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovn-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617744 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617749 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: E1125 20:36:55.617759 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="kube-rbac-proxy-node" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617767 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="kube-rbac-proxy-node" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617872 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617882 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617893 4983 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="nbdb" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617903 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617912 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovn-acl-logging" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617920 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617928 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617935 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="kube-rbac-proxy-node" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617943 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="northd" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617951 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovn-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.617957 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="sbdb" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.618128 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerName="ovnkube-controller" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.619753 4983 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.687197 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-slash\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.687242 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-ovn\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.687330 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.687388 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-cni-bin\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.687419 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.687415 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-slash" (OuterVolumeSpecName: "host-slash") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.687818 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.687865 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-run-netns\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.687895 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.687949 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-systemd-units\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.687981 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.687988 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-openvswitch\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688029 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688038 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5mng\" (UniqueName: \"kubernetes.io/projected/b577d7b6-2c09-4ed8-8907-36620b2145b2-kube-api-access-d5mng\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688106 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-etc-openvswitch\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688157 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-env-overrides\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688202 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-run-ovn-kubernetes\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688244 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-node-log\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688323 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-log-socket\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688376 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-config\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688433 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-script-lib\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688505 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-systemd\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688533 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-kubelet\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688606 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovn-node-metrics-cert\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc 
kubenswrapper[4983]: I1125 20:36:55.688639 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-cni-netd\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688669 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-var-lib-openvswitch\") pod \"b577d7b6-2c09-4ed8-8907-36620b2145b2\" (UID: \"b577d7b6-2c09-4ed8-8907-36620b2145b2\") " Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688895 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-log-socket" (OuterVolumeSpecName: "log-socket") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688945 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.688976 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689006 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-node-log" (OuterVolumeSpecName: "node-log") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689040 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689073 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1bd62700-2fe6-440e-b204-7cea099ea3b2-ovnkube-script-lib\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689127 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlzqf\" (UniqueName: \"kubernetes.io/projected/1bd62700-2fe6-440e-b204-7cea099ea3b2-kube-api-access-jlzqf\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689191 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/1bd62700-2fe6-440e-b204-7cea099ea3b2-env-overrides\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689228 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-run-netns\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689259 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689280 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-kubelet\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689322 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-slash\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689382 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1bd62700-2fe6-440e-b204-7cea099ea3b2-ovnkube-config\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689372 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689372 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689413 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.689527 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690090 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-etc-openvswitch\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690163 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-cni-netd\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690208 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-run-ovn-kubernetes\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690237 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-systemd-units\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690284 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1bd62700-2fe6-440e-b204-7cea099ea3b2-ovn-node-metrics-cert\") pod \"ovnkube-node-pz785\" 
(UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690375 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-var-lib-openvswitch\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690420 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690445 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-log-socket\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690638 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-run-ovn\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690665 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-config" (OuterVolumeSpecName: 
"ovnkube-config") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690838 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-node-log\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690893 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-run-systemd\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690909 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-cni-bin\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690929 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-run-openvswitch\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690984 4983 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.690998 4983 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691010 4983 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691021 4983 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691031 4983 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691041 4983 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-node-log\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691050 4983 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-log-socket\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691058 4983 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 
20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691068 4983 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691081 4983 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691092 4983 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691102 4983 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691112 4983 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-slash\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691121 4983 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691144 4983 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691155 4983 reconciler_common.go:293] "Volume detached for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.691165 4983 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.694490 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b577d7b6-2c09-4ed8-8907-36620b2145b2-kube-api-access-d5mng" (OuterVolumeSpecName: "kube-api-access-d5mng") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "kube-api-access-d5mng". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.694878 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.708506 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b577d7b6-2c09-4ed8-8907-36620b2145b2" (UID: "b577d7b6-2c09-4ed8-8907-36620b2145b2"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.792979 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1bd62700-2fe6-440e-b204-7cea099ea3b2-ovnkube-config\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793053 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-etc-openvswitch\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793078 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-cni-netd\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793099 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-run-ovn-kubernetes\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793121 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-systemd-units\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 
20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793138 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1bd62700-2fe6-440e-b204-7cea099ea3b2-ovn-node-metrics-cert\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793168 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-var-lib-openvswitch\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793190 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793196 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-run-ovn-kubernetes\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793198 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-cni-netd\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 
20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793425 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-etc-openvswitch\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793212 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-log-socket\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793482 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793461 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-var-lib-openvswitch\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793442 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-log-socket\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793519 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-run-ovn\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793543 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-systemd-units\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793651 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-run-ovn\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793660 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-node-log\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793683 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-node-log\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793742 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-run-systemd\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793785 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-cni-bin\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793823 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-run-openvswitch\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793870 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1bd62700-2fe6-440e-b204-7cea099ea3b2-ovnkube-script-lib\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793927 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlzqf\" (UniqueName: \"kubernetes.io/projected/1bd62700-2fe6-440e-b204-7cea099ea3b2-kube-api-access-jlzqf\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.793969 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/1bd62700-2fe6-440e-b204-7cea099ea3b2-env-overrides\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794014 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-run-netns\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794057 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-kubelet\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794089 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-slash\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794188 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5mng\" (UniqueName: \"kubernetes.io/projected/b577d7b6-2c09-4ed8-8907-36620b2145b2-kube-api-access-d5mng\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794219 4983 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b577d7b6-2c09-4ed8-8907-36620b2145b2-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794242 4983 reconciler_common.go:293] "Volume 
detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b577d7b6-2c09-4ed8-8907-36620b2145b2-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794291 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-slash\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794374 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-run-systemd\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794421 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-cni-bin\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794466 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-run-openvswitch\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794910 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-kubelet\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794964 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1bd62700-2fe6-440e-b204-7cea099ea3b2-ovnkube-config\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.794960 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1bd62700-2fe6-440e-b204-7cea099ea3b2-host-run-netns\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.795521 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1bd62700-2fe6-440e-b204-7cea099ea3b2-ovnkube-script-lib\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.795841 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1bd62700-2fe6-440e-b204-7cea099ea3b2-env-overrides\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.799356 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1bd62700-2fe6-440e-b204-7cea099ea3b2-ovn-node-metrics-cert\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 
20:36:55.815453 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlzqf\" (UniqueName: \"kubernetes.io/projected/1bd62700-2fe6-440e-b204-7cea099ea3b2-kube-api-access-jlzqf\") pod \"ovnkube-node-pz785\" (UID: \"1bd62700-2fe6-440e-b204-7cea099ea3b2\") " pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:55 crc kubenswrapper[4983]: I1125 20:36:55.935579 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.346029 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6fkbz_40e594b9-8aa2-400d-b72e-c36e4523ced3/kube-multus/2.log" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.349913 4983 generic.go:334] "Generic (PLEG): container finished" podID="1bd62700-2fe6-440e-b204-7cea099ea3b2" containerID="79b0d264957d8081e917ae05855fe57a414af116cab792a2bd6fee370713497d" exitCode=0 Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.349986 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" event={"ID":"1bd62700-2fe6-440e-b204-7cea099ea3b2","Type":"ContainerDied","Data":"79b0d264957d8081e917ae05855fe57a414af116cab792a2bd6fee370713497d"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.350013 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" event={"ID":"1bd62700-2fe6-440e-b204-7cea099ea3b2","Type":"ContainerStarted","Data":"1cc9f93268720498e644ddea9126430bef23b267f512b7de172199ce8e6392c9"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.358329 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovnkube-controller/3.log" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.361104 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovn-acl-logging/0.log" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.361787 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4t2p5_b577d7b6-2c09-4ed8-8907-36620b2145b2/ovn-controller/0.log" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362217 4983 generic.go:334] "Generic (PLEG): container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerID="f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771" exitCode=0 Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362246 4983 generic.go:334] "Generic (PLEG): container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerID="88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5" exitCode=0 Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362257 4983 generic.go:334] "Generic (PLEG): container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerID="7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154" exitCode=0 Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362269 4983 generic.go:334] "Generic (PLEG): container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerID="7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e" exitCode=0 Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362279 4983 generic.go:334] "Generic (PLEG): container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerID="58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567" exitCode=0 Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362289 4983 generic.go:334] "Generic (PLEG): container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerID="1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba" exitCode=0 Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362297 4983 generic.go:334] "Generic (PLEG): 
container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerID="4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551" exitCode=143 Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362309 4983 generic.go:334] "Generic (PLEG): container finished" podID="b577d7b6-2c09-4ed8-8907-36620b2145b2" containerID="3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f" exitCode=143 Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362336 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362373 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362392 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362400 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362402 4983 scope.go:117] "RemoveContainer" containerID="f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362408 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362766 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362796 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362815 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362829 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362838 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 
20:36:56.362846 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362854 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362863 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362870 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362878 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362885 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362895 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362907 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362916 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362925 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362933 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362941 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362949 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362957 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362964 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362971 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362978 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362988 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.362999 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363009 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363017 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363025 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363033 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363041 4983 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363048 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363056 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363063 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363071 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363081 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4t2p5" event={"ID":"b577d7b6-2c09-4ed8-8907-36620b2145b2","Type":"ContainerDied","Data":"1b837278eb882b6560262fba707494e01871ae9342e996f73ab509ed33207838"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363093 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363122 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277"} Nov 25 
20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363144 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363157 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363164 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363171 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363180 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363188 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363196 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f"} Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.363204 4983 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6"} Nov 25 
20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.452001 4983 scope.go:117] "RemoveContainer" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.477280 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4t2p5"] Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.479042 4983 scope.go:117] "RemoveContainer" containerID="88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.486784 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4t2p5"] Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.506873 4983 scope.go:117] "RemoveContainer" containerID="7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.527443 4983 scope.go:117] "RemoveContainer" containerID="7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.553793 4983 scope.go:117] "RemoveContainer" containerID="58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.581976 4983 scope.go:117] "RemoveContainer" containerID="1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.604939 4983 scope.go:117] "RemoveContainer" containerID="4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.642624 4983 scope.go:117] "RemoveContainer" containerID="3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.692895 4983 scope.go:117] "RemoveContainer" containerID="ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 
20:36:56.713353 4983 scope.go:117] "RemoveContainer" containerID="f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771" Nov 25 20:36:56 crc kubenswrapper[4983]: E1125 20:36:56.713877 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771\": container with ID starting with f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771 not found: ID does not exist" containerID="f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.713908 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771"} err="failed to get container status \"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771\": rpc error: code = NotFound desc = could not find container \"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771\": container with ID starting with f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.713929 4983 scope.go:117] "RemoveContainer" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" Nov 25 20:36:56 crc kubenswrapper[4983]: E1125 20:36:56.714275 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\": container with ID starting with 2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277 not found: ID does not exist" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.714297 4983 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277"} err="failed to get container status \"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\": rpc error: code = NotFound desc = could not find container \"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\": container with ID starting with 2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.714318 4983 scope.go:117] "RemoveContainer" containerID="88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5" Nov 25 20:36:56 crc kubenswrapper[4983]: E1125 20:36:56.714575 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\": container with ID starting with 88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5 not found: ID does not exist" containerID="88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.714599 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5"} err="failed to get container status \"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\": rpc error: code = NotFound desc = could not find container \"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\": container with ID starting with 88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.714610 4983 scope.go:117] "RemoveContainer" containerID="7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154" Nov 25 20:36:56 crc kubenswrapper[4983]: E1125 20:36:56.714831 4983 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\": container with ID starting with 7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154 not found: ID does not exist" containerID="7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.714852 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154"} err="failed to get container status \"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\": rpc error: code = NotFound desc = could not find container \"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\": container with ID starting with 7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.714877 4983 scope.go:117] "RemoveContainer" containerID="7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e" Nov 25 20:36:56 crc kubenswrapper[4983]: E1125 20:36:56.715133 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\": container with ID starting with 7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e not found: ID does not exist" containerID="7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.715155 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e"} err="failed to get container status \"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\": rpc error: code = NotFound desc = could not find container 
\"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\": container with ID starting with 7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.715169 4983 scope.go:117] "RemoveContainer" containerID="58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567" Nov 25 20:36:56 crc kubenswrapper[4983]: E1125 20:36:56.715541 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\": container with ID starting with 58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567 not found: ID does not exist" containerID="58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.715621 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567"} err="failed to get container status \"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\": rpc error: code = NotFound desc = could not find container \"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\": container with ID starting with 58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.715654 4983 scope.go:117] "RemoveContainer" containerID="1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba" Nov 25 20:36:56 crc kubenswrapper[4983]: E1125 20:36:56.715956 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\": container with ID starting with 1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba not found: ID does not exist" 
containerID="1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.715985 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba"} err="failed to get container status \"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\": rpc error: code = NotFound desc = could not find container \"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\": container with ID starting with 1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.716005 4983 scope.go:117] "RemoveContainer" containerID="4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551" Nov 25 20:36:56 crc kubenswrapper[4983]: E1125 20:36:56.716671 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\": container with ID starting with 4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551 not found: ID does not exist" containerID="4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.716702 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551"} err="failed to get container status \"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\": rpc error: code = NotFound desc = could not find container \"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\": container with ID starting with 4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.716719 4983 scope.go:117] 
"RemoveContainer" containerID="3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f" Nov 25 20:36:56 crc kubenswrapper[4983]: E1125 20:36:56.716990 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\": container with ID starting with 3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f not found: ID does not exist" containerID="3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.717013 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f"} err="failed to get container status \"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\": rpc error: code = NotFound desc = could not find container \"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\": container with ID starting with 3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.717027 4983 scope.go:117] "RemoveContainer" containerID="ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6" Nov 25 20:36:56 crc kubenswrapper[4983]: E1125 20:36:56.717304 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\": container with ID starting with ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6 not found: ID does not exist" containerID="ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.717326 4983 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6"} err="failed to get container status \"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\": rpc error: code = NotFound desc = could not find container \"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\": container with ID starting with ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.717342 4983 scope.go:117] "RemoveContainer" containerID="f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.717633 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771"} err="failed to get container status \"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771\": rpc error: code = NotFound desc = could not find container \"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771\": container with ID starting with f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.717694 4983 scope.go:117] "RemoveContainer" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.718109 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277"} err="failed to get container status \"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\": rpc error: code = NotFound desc = could not find container \"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\": container with ID starting with 2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277 not found: ID does not 
exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.718137 4983 scope.go:117] "RemoveContainer" containerID="88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.718426 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5"} err="failed to get container status \"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\": rpc error: code = NotFound desc = could not find container \"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\": container with ID starting with 88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.718454 4983 scope.go:117] "RemoveContainer" containerID="7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.718773 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154"} err="failed to get container status \"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\": rpc error: code = NotFound desc = could not find container \"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\": container with ID starting with 7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.718800 4983 scope.go:117] "RemoveContainer" containerID="7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.719206 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e"} err="failed to get container status 
\"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\": rpc error: code = NotFound desc = could not find container \"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\": container with ID starting with 7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.719273 4983 scope.go:117] "RemoveContainer" containerID="58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.719806 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567"} err="failed to get container status \"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\": rpc error: code = NotFound desc = could not find container \"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\": container with ID starting with 58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.719829 4983 scope.go:117] "RemoveContainer" containerID="1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.720098 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba"} err="failed to get container status \"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\": rpc error: code = NotFound desc = could not find container \"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\": container with ID starting with 1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.720132 4983 scope.go:117] "RemoveContainer" 
containerID="4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.720365 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551"} err="failed to get container status \"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\": rpc error: code = NotFound desc = could not find container \"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\": container with ID starting with 4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.720387 4983 scope.go:117] "RemoveContainer" containerID="3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.720602 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f"} err="failed to get container status \"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\": rpc error: code = NotFound desc = could not find container \"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\": container with ID starting with 3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.720625 4983 scope.go:117] "RemoveContainer" containerID="ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.720861 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6"} err="failed to get container status \"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\": rpc error: code = NotFound desc = could 
not find container \"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\": container with ID starting with ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.720918 4983 scope.go:117] "RemoveContainer" containerID="f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.721171 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771"} err="failed to get container status \"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771\": rpc error: code = NotFound desc = could not find container \"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771\": container with ID starting with f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.721197 4983 scope.go:117] "RemoveContainer" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.721450 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277"} err="failed to get container status \"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\": rpc error: code = NotFound desc = could not find container \"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\": container with ID starting with 2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.721506 4983 scope.go:117] "RemoveContainer" containerID="88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 
20:36:56.721796 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5"} err="failed to get container status \"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\": rpc error: code = NotFound desc = could not find container \"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\": container with ID starting with 88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.721829 4983 scope.go:117] "RemoveContainer" containerID="7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.722056 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154"} err="failed to get container status \"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\": rpc error: code = NotFound desc = could not find container \"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\": container with ID starting with 7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.722079 4983 scope.go:117] "RemoveContainer" containerID="7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.722296 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e"} err="failed to get container status \"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\": rpc error: code = NotFound desc = could not find container \"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\": container with ID starting with 
7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.722327 4983 scope.go:117] "RemoveContainer" containerID="58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.722653 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567"} err="failed to get container status \"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\": rpc error: code = NotFound desc = could not find container \"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\": container with ID starting with 58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.722686 4983 scope.go:117] "RemoveContainer" containerID="1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.722929 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba"} err="failed to get container status \"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\": rpc error: code = NotFound desc = could not find container \"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\": container with ID starting with 1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.722958 4983 scope.go:117] "RemoveContainer" containerID="4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.723158 4983 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551"} err="failed to get container status \"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\": rpc error: code = NotFound desc = could not find container \"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\": container with ID starting with 4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.723180 4983 scope.go:117] "RemoveContainer" containerID="3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.723363 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f"} err="failed to get container status \"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\": rpc error: code = NotFound desc = could not find container \"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\": container with ID starting with 3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.723387 4983 scope.go:117] "RemoveContainer" containerID="ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.723608 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6"} err="failed to get container status \"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\": rpc error: code = NotFound desc = could not find container \"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\": container with ID starting with ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6 not found: ID does not 
exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.723630 4983 scope.go:117] "RemoveContainer" containerID="f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.723858 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771"} err="failed to get container status \"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771\": rpc error: code = NotFound desc = could not find container \"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771\": container with ID starting with f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.723884 4983 scope.go:117] "RemoveContainer" containerID="2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.724127 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277"} err="failed to get container status \"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\": rpc error: code = NotFound desc = could not find container \"2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277\": container with ID starting with 2ba96e2f53418d4b1d5d08f859c8c113316a39ed4e5736e04ee05bdf52d59277 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.724162 4983 scope.go:117] "RemoveContainer" containerID="88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.724466 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5"} err="failed to get container status 
\"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\": rpc error: code = NotFound desc = could not find container \"88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5\": container with ID starting with 88b80bc0e6e0fb5642470e0519747f4732a253f31272726374c8d080bf23aff5 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.724495 4983 scope.go:117] "RemoveContainer" containerID="7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.726095 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154"} err="failed to get container status \"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\": rpc error: code = NotFound desc = could not find container \"7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154\": container with ID starting with 7d38c6926483f8a51f626ed6b3477dda365009ed90113652e153f8a39c0aa154 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.726133 4983 scope.go:117] "RemoveContainer" containerID="7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.726455 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e"} err="failed to get container status \"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\": rpc error: code = NotFound desc = could not find container \"7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e\": container with ID starting with 7266d0166c89f93b98ab6a261a87ec39020220e5eb89e1101e34b0a2565d2e1e not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.726480 4983 scope.go:117] "RemoveContainer" 
containerID="58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.727386 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567"} err="failed to get container status \"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\": rpc error: code = NotFound desc = could not find container \"58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567\": container with ID starting with 58f44ac3b26c449465d2bd908de835a39ae65edb345c84fe43214aa4e8e6a567 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.727438 4983 scope.go:117] "RemoveContainer" containerID="1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.727783 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba"} err="failed to get container status \"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\": rpc error: code = NotFound desc = could not find container \"1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba\": container with ID starting with 1711132be96ff298bc2db1c59c5f479e45d868e98ea243bd5cc137fc89ff2dba not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.727813 4983 scope.go:117] "RemoveContainer" containerID="4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.728096 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551"} err="failed to get container status \"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\": rpc error: code = NotFound desc = could 
not find container \"4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551\": container with ID starting with 4102dda11f9b38e9c3075745500848b59be4449b213129cd278d683b74cce551 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.728128 4983 scope.go:117] "RemoveContainer" containerID="3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.728377 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f"} err="failed to get container status \"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\": rpc error: code = NotFound desc = could not find container \"3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f\": container with ID starting with 3f7480af2f8d741361fe3166069c73ac1065c76b7d9be5be070e55755daefe0f not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.728408 4983 scope.go:117] "RemoveContainer" containerID="ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.728675 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6"} err="failed to get container status \"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\": rpc error: code = NotFound desc = could not find container \"ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6\": container with ID starting with ac51f7ad59e59b325073a6a47661729d34d2f1a075b71636a9b50fe11aaf27e6 not found: ID does not exist" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 20:36:56.728705 4983 scope.go:117] "RemoveContainer" containerID="f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771" Nov 25 20:36:56 crc kubenswrapper[4983]: I1125 
20:36:56.729395 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771"} err="failed to get container status \"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771\": rpc error: code = NotFound desc = could not find container \"f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771\": container with ID starting with f3412d4c5b7ef60c414e5ae889f58ce7489fe0b7039fc211fc9aa5c6fd0c3771 not found: ID does not exist" Nov 25 20:36:57 crc kubenswrapper[4983]: I1125 20:36:57.369437 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" event={"ID":"1bd62700-2fe6-440e-b204-7cea099ea3b2","Type":"ContainerStarted","Data":"b3e83dc3f0e18ef11254ae46f66a5a19e9c996e59e6ba971cdcd46c13a3c7fd5"} Nov 25 20:36:57 crc kubenswrapper[4983]: I1125 20:36:57.369843 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" event={"ID":"1bd62700-2fe6-440e-b204-7cea099ea3b2","Type":"ContainerStarted","Data":"4cdf1f0a928a11495e632cb263abc2abedee4518a83b51ab4a31a1f4e1676e0e"} Nov 25 20:36:57 crc kubenswrapper[4983]: I1125 20:36:57.369861 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" event={"ID":"1bd62700-2fe6-440e-b204-7cea099ea3b2","Type":"ContainerStarted","Data":"2e32b4064c63048b686e8275efb57bb5a35b87c6710dc0ad2b8854742f294d40"} Nov 25 20:36:57 crc kubenswrapper[4983]: I1125 20:36:57.369875 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" event={"ID":"1bd62700-2fe6-440e-b204-7cea099ea3b2","Type":"ContainerStarted","Data":"f9bd19da9f7b7dce073ea0c7edb17800618810afcd553628929780de1b221821"} Nov 25 20:36:57 crc kubenswrapper[4983]: I1125 20:36:57.369887 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" 
event={"ID":"1bd62700-2fe6-440e-b204-7cea099ea3b2","Type":"ContainerStarted","Data":"dd2ceee3a112ada1739eb772714d6ef2a35dff7644d397cba4f859103bd4ed7f"} Nov 25 20:36:57 crc kubenswrapper[4983]: I1125 20:36:57.369901 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" event={"ID":"1bd62700-2fe6-440e-b204-7cea099ea3b2","Type":"ContainerStarted","Data":"a7ff3c9d82cc945195a1d4a1bb44143de06a9b5a0775d3db132293e4780bd0e8"} Nov 25 20:36:57 crc kubenswrapper[4983]: I1125 20:36:57.623989 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b577d7b6-2c09-4ed8-8907-36620b2145b2" path="/var/lib/kubelet/pods/b577d7b6-2c09-4ed8-8907-36620b2145b2/volumes" Nov 25 20:37:00 crc kubenswrapper[4983]: I1125 20:37:00.397917 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" event={"ID":"1bd62700-2fe6-440e-b204-7cea099ea3b2","Type":"ContainerStarted","Data":"ac1e4f3f0429aee36e63ff1d26a5f3fa9efd4ac4ae87dc8d2d1903780f82613c"} Nov 25 20:37:02 crc kubenswrapper[4983]: I1125 20:37:02.420762 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" event={"ID":"1bd62700-2fe6-440e-b204-7cea099ea3b2","Type":"ContainerStarted","Data":"ba01905d35be2f06371974966c096111657b71e374eb05d4df0303aa8603f052"} Nov 25 20:37:02 crc kubenswrapper[4983]: I1125 20:37:02.421726 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:37:02 crc kubenswrapper[4983]: I1125 20:37:02.421753 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:37:02 crc kubenswrapper[4983]: I1125 20:37:02.421771 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:37:02 crc kubenswrapper[4983]: I1125 20:37:02.453108 4983 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:37:02 crc kubenswrapper[4983]: I1125 20:37:02.453874 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:37:02 crc kubenswrapper[4983]: I1125 20:37:02.459609 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" podStartSLOduration=7.459590484 podStartE2EDuration="7.459590484s" podCreationTimestamp="2025-11-25 20:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:37:02.454198281 +0000 UTC m=+603.566731673" watchObservedRunningTime="2025-11-25 20:37:02.459590484 +0000 UTC m=+603.572123886" Nov 25 20:37:09 crc kubenswrapper[4983]: I1125 20:37:09.607655 4983 scope.go:117] "RemoveContainer" containerID="e343e37d5bca4b2b04199dde3cd4ec70dfcf0769bf38fefdbeb42bcbc1e18a4f" Nov 25 20:37:09 crc kubenswrapper[4983]: E1125 20:37:09.609097 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-6fkbz_openshift-multus(40e594b9-8aa2-400d-b72e-c36e4523ced3)\"" pod="openshift-multus/multus-6fkbz" podUID="40e594b9-8aa2-400d-b72e-c36e4523ced3" Nov 25 20:37:09 crc kubenswrapper[4983]: I1125 20:37:09.927352 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:37:09 crc kubenswrapper[4983]: I1125 20:37:09.927434 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" 
podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:37:24 crc kubenswrapper[4983]: I1125 20:37:24.605150 4983 scope.go:117] "RemoveContainer" containerID="e343e37d5bca4b2b04199dde3cd4ec70dfcf0769bf38fefdbeb42bcbc1e18a4f" Nov 25 20:37:25 crc kubenswrapper[4983]: I1125 20:37:25.570753 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6fkbz_40e594b9-8aa2-400d-b72e-c36e4523ced3/kube-multus/2.log" Nov 25 20:37:25 crc kubenswrapper[4983]: I1125 20:37:25.571384 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6fkbz" event={"ID":"40e594b9-8aa2-400d-b72e-c36e4523ced3","Type":"ContainerStarted","Data":"5baa2442cc85129f5d145a8799310f47f170a18c4f1d6c58278b698f28903aee"} Nov 25 20:37:25 crc kubenswrapper[4983]: I1125 20:37:25.960324 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pz785" Nov 25 20:37:35 crc kubenswrapper[4983]: I1125 20:37:35.682256 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd"] Nov 25 20:37:35 crc kubenswrapper[4983]: I1125 20:37:35.685585 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:35 crc kubenswrapper[4983]: I1125 20:37:35.688529 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 20:37:35 crc kubenswrapper[4983]: I1125 20:37:35.693412 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd"] Nov 25 20:37:35 crc kubenswrapper[4983]: I1125 20:37:35.780759 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snsbv\" (UniqueName: \"kubernetes.io/projected/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-kube-api-access-snsbv\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd\" (UID: \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:35 crc kubenswrapper[4983]: I1125 20:37:35.780893 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd\" (UID: \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:35 crc kubenswrapper[4983]: I1125 20:37:35.780956 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd\" (UID: \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:35 crc kubenswrapper[4983]: 
I1125 20:37:35.882379 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd\" (UID: \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:35 crc kubenswrapper[4983]: I1125 20:37:35.882450 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd\" (UID: \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:35 crc kubenswrapper[4983]: I1125 20:37:35.882526 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snsbv\" (UniqueName: \"kubernetes.io/projected/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-kube-api-access-snsbv\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd\" (UID: \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:35 crc kubenswrapper[4983]: I1125 20:37:35.882986 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd\" (UID: \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:35 crc kubenswrapper[4983]: I1125 20:37:35.883356 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd\" (UID: \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:35 crc kubenswrapper[4983]: I1125 20:37:35.904694 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snsbv\" (UniqueName: \"kubernetes.io/projected/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-kube-api-access-snsbv\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd\" (UID: \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:36 crc kubenswrapper[4983]: I1125 20:37:36.006947 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:36 crc kubenswrapper[4983]: I1125 20:37:36.250888 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd"] Nov 25 20:37:36 crc kubenswrapper[4983]: I1125 20:37:36.645202 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" event={"ID":"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728","Type":"ContainerStarted","Data":"ba0c9434086ad1c9f8695d1a50341527004f5fb8332384bed26675b68e70f95d"} Nov 25 20:37:36 crc kubenswrapper[4983]: I1125 20:37:36.645768 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" event={"ID":"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728","Type":"ContainerStarted","Data":"2664f2991f8d407283fe875a697b314a6f426c0404ba9652e10b1da83a1946cc"} Nov 25 20:37:37 crc kubenswrapper[4983]: I1125 20:37:37.650284 4983 
generic.go:334] "Generic (PLEG): container finished" podID="c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" containerID="ba0c9434086ad1c9f8695d1a50341527004f5fb8332384bed26675b68e70f95d" exitCode=0 Nov 25 20:37:37 crc kubenswrapper[4983]: I1125 20:37:37.650325 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" event={"ID":"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728","Type":"ContainerDied","Data":"ba0c9434086ad1c9f8695d1a50341527004f5fb8332384bed26675b68e70f95d"} Nov 25 20:37:39 crc kubenswrapper[4983]: I1125 20:37:39.668523 4983 generic.go:334] "Generic (PLEG): container finished" podID="c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" containerID="e1589a883b0370dc406b45f7a4452adb6d64e95c03ff54d74c5bf26434f439a4" exitCode=0 Nov 25 20:37:39 crc kubenswrapper[4983]: I1125 20:37:39.668698 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" event={"ID":"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728","Type":"ContainerDied","Data":"e1589a883b0370dc406b45f7a4452adb6d64e95c03ff54d74c5bf26434f439a4"} Nov 25 20:37:39 crc kubenswrapper[4983]: I1125 20:37:39.927296 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:37:39 crc kubenswrapper[4983]: I1125 20:37:39.927856 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:37:39 crc kubenswrapper[4983]: I1125 20:37:39.927939 4983 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:37:39 crc kubenswrapper[4983]: I1125 20:37:39.928995 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7306555a4508b1828e5cf4831dc81aad7a61440dcfa7cbd1e1c973af6958d2b0"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:37:39 crc kubenswrapper[4983]: I1125 20:37:39.929110 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://7306555a4508b1828e5cf4831dc81aad7a61440dcfa7cbd1e1c973af6958d2b0" gracePeriod=600 Nov 25 20:37:40 crc kubenswrapper[4983]: I1125 20:37:40.680183 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="7306555a4508b1828e5cf4831dc81aad7a61440dcfa7cbd1e1c973af6958d2b0" exitCode=0 Nov 25 20:37:40 crc kubenswrapper[4983]: I1125 20:37:40.680238 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"7306555a4508b1828e5cf4831dc81aad7a61440dcfa7cbd1e1c973af6958d2b0"} Nov 25 20:37:40 crc kubenswrapper[4983]: I1125 20:37:40.680728 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"332f27d6dcaee6d6f56ec3302fd09a3529205e5c94a5a306755d9476fe03353d"} Nov 25 20:37:40 crc kubenswrapper[4983]: I1125 20:37:40.680748 4983 scope.go:117] "RemoveContainer" 
containerID="cedbe1d0d40fe4f150b02eabc08807db470ed60486ee4e83fdd5c11bc49792fa" Nov 25 20:37:40 crc kubenswrapper[4983]: I1125 20:37:40.684895 4983 generic.go:334] "Generic (PLEG): container finished" podID="c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" containerID="e2e38eb4e0e0feec3368bb3fc4f7f61cdad85b81b947583c56dfbdc4a89b4b86" exitCode=0 Nov 25 20:37:40 crc kubenswrapper[4983]: I1125 20:37:40.684936 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" event={"ID":"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728","Type":"ContainerDied","Data":"e2e38eb4e0e0feec3368bb3fc4f7f61cdad85b81b947583c56dfbdc4a89b4b86"} Nov 25 20:37:42 crc kubenswrapper[4983]: I1125 20:37:42.031447 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:42 crc kubenswrapper[4983]: I1125 20:37:42.170789 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-bundle\") pod \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\" (UID: \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\") " Nov 25 20:37:42 crc kubenswrapper[4983]: I1125 20:37:42.170888 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snsbv\" (UniqueName: \"kubernetes.io/projected/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-kube-api-access-snsbv\") pod \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\" (UID: \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\") " Nov 25 20:37:42 crc kubenswrapper[4983]: I1125 20:37:42.171045 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-util\") pod \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\" (UID: \"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728\") " Nov 25 20:37:42 crc 
kubenswrapper[4983]: I1125 20:37:42.173476 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-bundle" (OuterVolumeSpecName: "bundle") pod "c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" (UID: "c9f48e6f-bd8d-4373-a680-4bf6a3ac8728"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:37:42 crc kubenswrapper[4983]: I1125 20:37:42.183038 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-kube-api-access-snsbv" (OuterVolumeSpecName: "kube-api-access-snsbv") pod "c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" (UID: "c9f48e6f-bd8d-4373-a680-4bf6a3ac8728"). InnerVolumeSpecName "kube-api-access-snsbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:37:42 crc kubenswrapper[4983]: I1125 20:37:42.272771 4983 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:37:42 crc kubenswrapper[4983]: I1125 20:37:42.273382 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snsbv\" (UniqueName: \"kubernetes.io/projected/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-kube-api-access-snsbv\") on node \"crc\" DevicePath \"\"" Nov 25 20:37:42 crc kubenswrapper[4983]: I1125 20:37:42.276221 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-util" (OuterVolumeSpecName: "util") pod "c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" (UID: "c9f48e6f-bd8d-4373-a680-4bf6a3ac8728"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:37:42 crc kubenswrapper[4983]: I1125 20:37:42.374588 4983 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c9f48e6f-bd8d-4373-a680-4bf6a3ac8728-util\") on node \"crc\" DevicePath \"\"" Nov 25 20:37:42 crc kubenswrapper[4983]: I1125 20:37:42.708219 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" event={"ID":"c9f48e6f-bd8d-4373-a680-4bf6a3ac8728","Type":"ContainerDied","Data":"2664f2991f8d407283fe875a697b314a6f426c0404ba9652e10b1da83a1946cc"} Nov 25 20:37:42 crc kubenswrapper[4983]: I1125 20:37:42.708300 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2664f2991f8d407283fe875a697b314a6f426c0404ba9652e10b1da83a1946cc" Nov 25 20:37:42 crc kubenswrapper[4983]: I1125 20:37:42.708442 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd" Nov 25 20:37:44 crc kubenswrapper[4983]: I1125 20:37:44.751137 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-5tmqq"] Nov 25 20:37:44 crc kubenswrapper[4983]: E1125 20:37:44.752927 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" containerName="util" Nov 25 20:37:44 crc kubenswrapper[4983]: I1125 20:37:44.753034 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" containerName="util" Nov 25 20:37:44 crc kubenswrapper[4983]: E1125 20:37:44.753098 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" containerName="extract" Nov 25 20:37:44 crc kubenswrapper[4983]: I1125 20:37:44.753152 4983 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" containerName="extract" Nov 25 20:37:44 crc kubenswrapper[4983]: E1125 20:37:44.753208 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" containerName="pull" Nov 25 20:37:44 crc kubenswrapper[4983]: I1125 20:37:44.753262 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" containerName="pull" Nov 25 20:37:44 crc kubenswrapper[4983]: I1125 20:37:44.753414 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9f48e6f-bd8d-4373-a680-4bf6a3ac8728" containerName="extract" Nov 25 20:37:44 crc kubenswrapper[4983]: I1125 20:37:44.754017 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-5tmqq" Nov 25 20:37:44 crc kubenswrapper[4983]: I1125 20:37:44.757830 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-blkf6" Nov 25 20:37:44 crc kubenswrapper[4983]: I1125 20:37:44.758130 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 20:37:44 crc kubenswrapper[4983]: I1125 20:37:44.758281 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 20:37:44 crc kubenswrapper[4983]: I1125 20:37:44.765187 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-5tmqq"] Nov 25 20:37:44 crc kubenswrapper[4983]: I1125 20:37:44.914827 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np7m2\" (UniqueName: \"kubernetes.io/projected/fdab2d13-eec3-468a-b383-e6bc7e00849f-kube-api-access-np7m2\") pod \"nmstate-operator-557fdffb88-5tmqq\" (UID: \"fdab2d13-eec3-468a-b383-e6bc7e00849f\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-5tmqq" Nov 25 
20:37:45 crc kubenswrapper[4983]: I1125 20:37:45.016105 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np7m2\" (UniqueName: \"kubernetes.io/projected/fdab2d13-eec3-468a-b383-e6bc7e00849f-kube-api-access-np7m2\") pod \"nmstate-operator-557fdffb88-5tmqq\" (UID: \"fdab2d13-eec3-468a-b383-e6bc7e00849f\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-5tmqq" Nov 25 20:37:45 crc kubenswrapper[4983]: I1125 20:37:45.035270 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np7m2\" (UniqueName: \"kubernetes.io/projected/fdab2d13-eec3-468a-b383-e6bc7e00849f-kube-api-access-np7m2\") pod \"nmstate-operator-557fdffb88-5tmqq\" (UID: \"fdab2d13-eec3-468a-b383-e6bc7e00849f\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-5tmqq" Nov 25 20:37:45 crc kubenswrapper[4983]: I1125 20:37:45.075255 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-5tmqq" Nov 25 20:37:45 crc kubenswrapper[4983]: I1125 20:37:45.306422 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-5tmqq"] Nov 25 20:37:45 crc kubenswrapper[4983]: W1125 20:37:45.317313 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdab2d13_eec3_468a_b383_e6bc7e00849f.slice/crio-14962b426c0c98935a6d40fe982733a83f82e04aa4352e90febedcb4e9dc95a5 WatchSource:0}: Error finding container 14962b426c0c98935a6d40fe982733a83f82e04aa4352e90febedcb4e9dc95a5: Status 404 returned error can't find the container with id 14962b426c0c98935a6d40fe982733a83f82e04aa4352e90febedcb4e9dc95a5 Nov 25 20:37:45 crc kubenswrapper[4983]: I1125 20:37:45.740300 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-5tmqq" 
event={"ID":"fdab2d13-eec3-468a-b383-e6bc7e00849f","Type":"ContainerStarted","Data":"14962b426c0c98935a6d40fe982733a83f82e04aa4352e90febedcb4e9dc95a5"} Nov 25 20:37:47 crc kubenswrapper[4983]: I1125 20:37:47.751822 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-5tmqq" event={"ID":"fdab2d13-eec3-468a-b383-e6bc7e00849f","Type":"ContainerStarted","Data":"041e33eb989be13ad6410a5a24dd4ef509bfd26f5380f9b7958e65dbef7eaf16"} Nov 25 20:37:47 crc kubenswrapper[4983]: I1125 20:37:47.777686 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-5tmqq" podStartSLOduration=1.796344172 podStartE2EDuration="3.777665578s" podCreationTimestamp="2025-11-25 20:37:44 +0000 UTC" firstStartedPulling="2025-11-25 20:37:45.319859191 +0000 UTC m=+646.432392583" lastFinishedPulling="2025-11-25 20:37:47.301180587 +0000 UTC m=+648.413713989" observedRunningTime="2025-11-25 20:37:47.775750357 +0000 UTC m=+648.888283749" watchObservedRunningTime="2025-11-25 20:37:47.777665578 +0000 UTC m=+648.890198970" Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.879992 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-gz8k6"] Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.882988 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gz8k6" Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.886137 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-szl5n" Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.889111 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd"] Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.889963 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.891483 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.899481 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd"] Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.907604 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-q89j2"] Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.908384 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.926307 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-gz8k6"] Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.967988 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-7kqbd\" (UID: \"ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.968059 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsf8m\" (UniqueName: \"kubernetes.io/projected/ab060c60-6a98-4358-a028-e3600d0239f4-kube-api-access-zsf8m\") pod \"nmstate-handler-q89j2\" (UID: \"ab060c60-6a98-4358-a028-e3600d0239f4\") " pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.968094 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rbcf\" (UniqueName: 
\"kubernetes.io/projected/1064f79e-2d97-4733-a2f2-f5f96b204825-kube-api-access-9rbcf\") pod \"nmstate-metrics-5dcf9c57c5-gz8k6\" (UID: \"1064f79e-2d97-4733-a2f2-f5f96b204825\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gz8k6" Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.968116 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ab060c60-6a98-4358-a028-e3600d0239f4-nmstate-lock\") pod \"nmstate-handler-q89j2\" (UID: \"ab060c60-6a98-4358-a028-e3600d0239f4\") " pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.968141 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ab060c60-6a98-4358-a028-e3600d0239f4-dbus-socket\") pod \"nmstate-handler-q89j2\" (UID: \"ab060c60-6a98-4358-a028-e3600d0239f4\") " pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.968177 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ab060c60-6a98-4358-a028-e3600d0239f4-ovs-socket\") pod \"nmstate-handler-q89j2\" (UID: \"ab060c60-6a98-4358-a028-e3600d0239f4\") " pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:55 crc kubenswrapper[4983]: I1125 20:37:55.968203 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6972\" (UniqueName: \"kubernetes.io/projected/ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4-kube-api-access-j6972\") pod \"nmstate-webhook-6b89b748d8-7kqbd\" (UID: \"ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.035634 4983 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw"] Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.036286 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.038530 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-bcf8p" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.039110 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.039252 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.045233 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw"] Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069275 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsf8m\" (UniqueName: \"kubernetes.io/projected/ab060c60-6a98-4358-a028-e3600d0239f4-kube-api-access-zsf8m\") pod \"nmstate-handler-q89j2\" (UID: \"ab060c60-6a98-4358-a028-e3600d0239f4\") " pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069319 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cdedcf78-6faf-457d-8817-2d87dc07b913-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-fkqdw\" (UID: \"cdedcf78-6faf-457d-8817-2d87dc07b913\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069344 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rbcf\" (UniqueName: 
\"kubernetes.io/projected/1064f79e-2d97-4733-a2f2-f5f96b204825-kube-api-access-9rbcf\") pod \"nmstate-metrics-5dcf9c57c5-gz8k6\" (UID: \"1064f79e-2d97-4733-a2f2-f5f96b204825\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gz8k6" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069365 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ab060c60-6a98-4358-a028-e3600d0239f4-nmstate-lock\") pod \"nmstate-handler-q89j2\" (UID: \"ab060c60-6a98-4358-a028-e3600d0239f4\") " pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069387 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cdedcf78-6faf-457d-8817-2d87dc07b913-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-fkqdw\" (UID: \"cdedcf78-6faf-457d-8817-2d87dc07b913\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069404 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ab060c60-6a98-4358-a028-e3600d0239f4-dbus-socket\") pod \"nmstate-handler-q89j2\" (UID: \"ab060c60-6a98-4358-a028-e3600d0239f4\") " pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069430 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvcrn\" (UniqueName: \"kubernetes.io/projected/cdedcf78-6faf-457d-8817-2d87dc07b913-kube-api-access-vvcrn\") pod \"nmstate-console-plugin-5874bd7bc5-fkqdw\" (UID: \"cdedcf78-6faf-457d-8817-2d87dc07b913\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069452 4983 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ab060c60-6a98-4358-a028-e3600d0239f4-ovs-socket\") pod \"nmstate-handler-q89j2\" (UID: \"ab060c60-6a98-4358-a028-e3600d0239f4\") " pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069474 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6972\" (UniqueName: \"kubernetes.io/projected/ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4-kube-api-access-j6972\") pod \"nmstate-webhook-6b89b748d8-7kqbd\" (UID: \"ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069482 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ab060c60-6a98-4358-a028-e3600d0239f4-nmstate-lock\") pod \"nmstate-handler-q89j2\" (UID: \"ab060c60-6a98-4358-a028-e3600d0239f4\") " pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069502 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-7kqbd\" (UID: \"ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069670 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ab060c60-6a98-4358-a028-e3600d0239f4-ovs-socket\") pod \"nmstate-handler-q89j2\" (UID: \"ab060c60-6a98-4358-a028-e3600d0239f4\") " pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.069784 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/ab060c60-6a98-4358-a028-e3600d0239f4-dbus-socket\") pod \"nmstate-handler-q89j2\" (UID: \"ab060c60-6a98-4358-a028-e3600d0239f4\") " pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:56 crc kubenswrapper[4983]: E1125 20:37:56.069792 4983 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 25 20:37:56 crc kubenswrapper[4983]: E1125 20:37:56.069832 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4-tls-key-pair podName:ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4 nodeName:}" failed. No retries permitted until 2025-11-25 20:37:56.569819039 +0000 UTC m=+657.682352431 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4-tls-key-pair") pod "nmstate-webhook-6b89b748d8-7kqbd" (UID: "ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4") : secret "openshift-nmstate-webhook" not found Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.092777 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsf8m\" (UniqueName: \"kubernetes.io/projected/ab060c60-6a98-4358-a028-e3600d0239f4-kube-api-access-zsf8m\") pod \"nmstate-handler-q89j2\" (UID: \"ab060c60-6a98-4358-a028-e3600d0239f4\") " pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.093491 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6972\" (UniqueName: \"kubernetes.io/projected/ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4-kube-api-access-j6972\") pod \"nmstate-webhook-6b89b748d8-7kqbd\" (UID: \"ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.101492 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9rbcf\" (UniqueName: \"kubernetes.io/projected/1064f79e-2d97-4733-a2f2-f5f96b204825-kube-api-access-9rbcf\") pod \"nmstate-metrics-5dcf9c57c5-gz8k6\" (UID: \"1064f79e-2d97-4733-a2f2-f5f96b204825\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gz8k6" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.170569 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvcrn\" (UniqueName: \"kubernetes.io/projected/cdedcf78-6faf-457d-8817-2d87dc07b913-kube-api-access-vvcrn\") pod \"nmstate-console-plugin-5874bd7bc5-fkqdw\" (UID: \"cdedcf78-6faf-457d-8817-2d87dc07b913\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.170958 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cdedcf78-6faf-457d-8817-2d87dc07b913-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-fkqdw\" (UID: \"cdedcf78-6faf-457d-8817-2d87dc07b913\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.170991 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cdedcf78-6faf-457d-8817-2d87dc07b913-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-fkqdw\" (UID: \"cdedcf78-6faf-457d-8817-2d87dc07b913\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" Nov 25 20:37:56 crc kubenswrapper[4983]: E1125 20:37:56.171107 4983 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 25 20:37:56 crc kubenswrapper[4983]: E1125 20:37:56.171172 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdedcf78-6faf-457d-8817-2d87dc07b913-plugin-serving-cert podName:cdedcf78-6faf-457d-8817-2d87dc07b913 nodeName:}" failed. 
No retries permitted until 2025-11-25 20:37:56.671154078 +0000 UTC m=+657.783687470 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/cdedcf78-6faf-457d-8817-2d87dc07b913-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-fkqdw" (UID: "cdedcf78-6faf-457d-8817-2d87dc07b913") : secret "plugin-serving-cert" not found Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.171987 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cdedcf78-6faf-457d-8817-2d87dc07b913-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-fkqdw\" (UID: \"cdedcf78-6faf-457d-8817-2d87dc07b913\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.192472 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvcrn\" (UniqueName: \"kubernetes.io/projected/cdedcf78-6faf-457d-8817-2d87dc07b913-kube-api-access-vvcrn\") pod \"nmstate-console-plugin-5874bd7bc5-fkqdw\" (UID: \"cdedcf78-6faf-457d-8817-2d87dc07b913\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.212349 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gz8k6" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.226879 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-944dc48f6-khl25"] Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.227519 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.249988 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-944dc48f6-khl25"] Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.255913 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.275119 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58e8c353-ad46-4987-af8f-f57a9fa548f5-console-config\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.275168 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58e8c353-ad46-4987-af8f-f57a9fa548f5-console-serving-cert\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.275211 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58e8c353-ad46-4987-af8f-f57a9fa548f5-trusted-ca-bundle\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.275263 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58e8c353-ad46-4987-af8f-f57a9fa548f5-oauth-serving-cert\") pod \"console-944dc48f6-khl25\" (UID: 
\"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.275290 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58e8c353-ad46-4987-af8f-f57a9fa548f5-service-ca\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.275311 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58e8c353-ad46-4987-af8f-f57a9fa548f5-console-oauth-config\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.275331 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw77t\" (UniqueName: \"kubernetes.io/projected/58e8c353-ad46-4987-af8f-f57a9fa548f5-kube-api-access-zw77t\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.376578 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58e8c353-ad46-4987-af8f-f57a9fa548f5-trusted-ca-bundle\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.377145 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58e8c353-ad46-4987-af8f-f57a9fa548f5-oauth-serving-cert\") pod 
\"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.377171 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58e8c353-ad46-4987-af8f-f57a9fa548f5-service-ca\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.377191 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58e8c353-ad46-4987-af8f-f57a9fa548f5-console-oauth-config\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.377211 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw77t\" (UniqueName: \"kubernetes.io/projected/58e8c353-ad46-4987-af8f-f57a9fa548f5-kube-api-access-zw77t\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.377234 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58e8c353-ad46-4987-af8f-f57a9fa548f5-console-config\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.377251 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58e8c353-ad46-4987-af8f-f57a9fa548f5-console-serving-cert\") pod \"console-944dc48f6-khl25\" (UID: 
\"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.380805 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58e8c353-ad46-4987-af8f-f57a9fa548f5-trusted-ca-bundle\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.382512 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/58e8c353-ad46-4987-af8f-f57a9fa548f5-oauth-serving-cert\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.382698 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/58e8c353-ad46-4987-af8f-f57a9fa548f5-service-ca\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.383426 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/58e8c353-ad46-4987-af8f-f57a9fa548f5-console-config\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.385320 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/58e8c353-ad46-4987-af8f-f57a9fa548f5-console-oauth-config\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " 
pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.387489 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/58e8c353-ad46-4987-af8f-f57a9fa548f5-console-serving-cert\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.399725 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw77t\" (UniqueName: \"kubernetes.io/projected/58e8c353-ad46-4987-af8f-f57a9fa548f5-kube-api-access-zw77t\") pod \"console-944dc48f6-khl25\" (UID: \"58e8c353-ad46-4987-af8f-f57a9fa548f5\") " pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.579967 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-7kqbd\" (UID: \"ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.582898 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-7kqbd\" (UID: \"ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.591592 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.649828 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-gz8k6"] Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.681581 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cdedcf78-6faf-457d-8817-2d87dc07b913-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-fkqdw\" (UID: \"cdedcf78-6faf-457d-8817-2d87dc07b913\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.685699 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cdedcf78-6faf-457d-8817-2d87dc07b913-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-fkqdw\" (UID: \"cdedcf78-6faf-457d-8817-2d87dc07b913\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.770502 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-944dc48f6-khl25"] Nov 25 20:37:56 crc kubenswrapper[4983]: W1125 20:37:56.777420 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58e8c353_ad46_4987_af8f_f57a9fa548f5.slice/crio-c318c819d826028a5234f4bf749867dab69f2d52cb9528127466e7ec5d3f1e82 WatchSource:0}: Error finding container c318c819d826028a5234f4bf749867dab69f2d52cb9528127466e7ec5d3f1e82: Status 404 returned error can't find the container with id c318c819d826028a5234f4bf749867dab69f2d52cb9528127466e7ec5d3f1e82 Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.802111 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q89j2" 
event={"ID":"ab060c60-6a98-4358-a028-e3600d0239f4","Type":"ContainerStarted","Data":"6fa8b2cf77a2d393e52e724c3c8e72678ed72400e4c3c23bf6667745dd3c757e"} Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.803075 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gz8k6" event={"ID":"1064f79e-2d97-4733-a2f2-f5f96b204825","Type":"ContainerStarted","Data":"3398f3fb602c1e13a8342244b0e4a9fd36fd075c601ce4cb174513aebdae35ba"} Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.804068 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-944dc48f6-khl25" event={"ID":"58e8c353-ad46-4987-af8f-f57a9fa548f5","Type":"ContainerStarted","Data":"c318c819d826028a5234f4bf749867dab69f2d52cb9528127466e7ec5d3f1e82"} Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.833296 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" Nov 25 20:37:56 crc kubenswrapper[4983]: I1125 20:37:56.974347 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" Nov 25 20:37:57 crc kubenswrapper[4983]: I1125 20:37:57.018654 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd"] Nov 25 20:37:57 crc kubenswrapper[4983]: W1125 20:37:57.034163 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce76eb4b_37f0_4067_a4d2_34a1b8e0b6a4.slice/crio-c94c9d636ba3db4cba48f5f78c723ff7a3b887fd5dcbf6d182f36cafda78ddea WatchSource:0}: Error finding container c94c9d636ba3db4cba48f5f78c723ff7a3b887fd5dcbf6d182f36cafda78ddea: Status 404 returned error can't find the container with id c94c9d636ba3db4cba48f5f78c723ff7a3b887fd5dcbf6d182f36cafda78ddea Nov 25 20:37:57 crc kubenswrapper[4983]: I1125 20:37:57.158763 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw"] Nov 25 20:37:57 crc kubenswrapper[4983]: I1125 20:37:57.811289 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" event={"ID":"cdedcf78-6faf-457d-8817-2d87dc07b913","Type":"ContainerStarted","Data":"4e81a8ca0b5e2cbb9bfa110920906c73044d759e808f5e1d36c28be6e89eb281"} Nov 25 20:37:57 crc kubenswrapper[4983]: I1125 20:37:57.813618 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" event={"ID":"ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4","Type":"ContainerStarted","Data":"c94c9d636ba3db4cba48f5f78c723ff7a3b887fd5dcbf6d182f36cafda78ddea"} Nov 25 20:37:57 crc kubenswrapper[4983]: I1125 20:37:57.816511 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-944dc48f6-khl25" event={"ID":"58e8c353-ad46-4987-af8f-f57a9fa548f5","Type":"ContainerStarted","Data":"84347bd82b628e260107812711791033ad2cf2b3574bdd8d6773b77ac23027bf"} Nov 25 20:37:57 crc 
kubenswrapper[4983]: I1125 20:37:57.834931 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-944dc48f6-khl25" podStartSLOduration=1.834911998 podStartE2EDuration="1.834911998s" podCreationTimestamp="2025-11-25 20:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:37:57.833235273 +0000 UTC m=+658.945768665" watchObservedRunningTime="2025-11-25 20:37:57.834911998 +0000 UTC m=+658.947445390" Nov 25 20:37:58 crc kubenswrapper[4983]: I1125 20:37:58.823250 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" event={"ID":"ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4","Type":"ContainerStarted","Data":"2713641a1bfddeff769b833456aeb5ce9d49215bb8afff3298821017b136d4ea"} Nov 25 20:37:58 crc kubenswrapper[4983]: I1125 20:37:58.823694 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" Nov 25 20:37:58 crc kubenswrapper[4983]: I1125 20:37:58.825142 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q89j2" event={"ID":"ab060c60-6a98-4358-a028-e3600d0239f4","Type":"ContainerStarted","Data":"fdc86d37430812f2eab99c0bd06a7e30d9f1657e6b18f5e7f0b49ebf9598db9a"} Nov 25 20:37:58 crc kubenswrapper[4983]: I1125 20:37:58.825823 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:37:58 crc kubenswrapper[4983]: I1125 20:37:58.836346 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gz8k6" event={"ID":"1064f79e-2d97-4733-a2f2-f5f96b204825","Type":"ContainerStarted","Data":"e3ca89fe95efdd02257b95fa5e694e769b0be70be6644cdd20b94bb6b130f66c"} Nov 25 20:37:58 crc kubenswrapper[4983]: I1125 20:37:58.850726 4983 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" podStartSLOduration=2.477133245 podStartE2EDuration="3.850701306s" podCreationTimestamp="2025-11-25 20:37:55 +0000 UTC" firstStartedPulling="2025-11-25 20:37:57.036912676 +0000 UTC m=+658.149446068" lastFinishedPulling="2025-11-25 20:37:58.410480737 +0000 UTC m=+659.523014129" observedRunningTime="2025-11-25 20:37:58.841230225 +0000 UTC m=+659.953763677" watchObservedRunningTime="2025-11-25 20:37:58.850701306 +0000 UTC m=+659.963234738" Nov 25 20:37:58 crc kubenswrapper[4983]: I1125 20:37:58.866985 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-q89j2" podStartSLOduration=1.748558426 podStartE2EDuration="3.866961947s" podCreationTimestamp="2025-11-25 20:37:55 +0000 UTC" firstStartedPulling="2025-11-25 20:37:56.27867336 +0000 UTC m=+657.391206752" lastFinishedPulling="2025-11-25 20:37:58.397076851 +0000 UTC m=+659.509610273" observedRunningTime="2025-11-25 20:37:58.865748795 +0000 UTC m=+659.978282227" watchObservedRunningTime="2025-11-25 20:37:58.866961947 +0000 UTC m=+659.979495339" Nov 25 20:37:59 crc kubenswrapper[4983]: I1125 20:37:59.844898 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" event={"ID":"cdedcf78-6faf-457d-8817-2d87dc07b913","Type":"ContainerStarted","Data":"3db0e66d1efc5fc96bd3bcaff73130d71629828958f870fe0c96ae033d0544d5"} Nov 25 20:37:59 crc kubenswrapper[4983]: I1125 20:37:59.860625 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-fkqdw" podStartSLOduration=1.649446607 podStartE2EDuration="3.860609329s" podCreationTimestamp="2025-11-25 20:37:56 +0000 UTC" firstStartedPulling="2025-11-25 20:37:57.166006271 +0000 UTC m=+658.278539663" lastFinishedPulling="2025-11-25 20:37:59.377168993 +0000 UTC m=+660.489702385" observedRunningTime="2025-11-25 
20:37:59.860013443 +0000 UTC m=+660.972546835" watchObservedRunningTime="2025-11-25 20:37:59.860609329 +0000 UTC m=+660.973142721" Nov 25 20:38:00 crc kubenswrapper[4983]: I1125 20:38:00.851526 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gz8k6" event={"ID":"1064f79e-2d97-4733-a2f2-f5f96b204825","Type":"ContainerStarted","Data":"4614a697f5827689414128475f6f3cceb3c29d986628fb481a997d552d29225f"} Nov 25 20:38:00 crc kubenswrapper[4983]: I1125 20:38:00.875320 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gz8k6" podStartSLOduration=1.9585407369999999 podStartE2EDuration="5.875295199s" podCreationTimestamp="2025-11-25 20:37:55 +0000 UTC" firstStartedPulling="2025-11-25 20:37:56.658080116 +0000 UTC m=+657.770613508" lastFinishedPulling="2025-11-25 20:38:00.574834538 +0000 UTC m=+661.687367970" observedRunningTime="2025-11-25 20:38:00.869118035 +0000 UTC m=+661.981651417" watchObservedRunningTime="2025-11-25 20:38:00.875295199 +0000 UTC m=+661.987828631" Nov 25 20:38:06 crc kubenswrapper[4983]: I1125 20:38:06.295417 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-q89j2" Nov 25 20:38:06 crc kubenswrapper[4983]: I1125 20:38:06.591843 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:38:06 crc kubenswrapper[4983]: I1125 20:38:06.592451 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:38:06 crc kubenswrapper[4983]: I1125 20:38:06.600542 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:38:06 crc kubenswrapper[4983]: I1125 20:38:06.920458 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-944dc48f6-khl25" Nov 25 20:38:06 crc kubenswrapper[4983]: I1125 20:38:06.991764 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g8bfq"] Nov 25 20:38:16 crc kubenswrapper[4983]: I1125 20:38:16.841417 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-7kqbd" Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.623318 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z"] Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.627140 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.633495 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.646698 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z"] Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.673453 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcfc0074-c596-4345-9cd1-caada40895be-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z\" (UID: \"bcfc0074-c596-4345-9cd1-caada40895be\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.673545 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6q7f\" (UniqueName: \"kubernetes.io/projected/bcfc0074-c596-4345-9cd1-caada40895be-kube-api-access-w6q7f\") 
pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z\" (UID: \"bcfc0074-c596-4345-9cd1-caada40895be\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.673622 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcfc0074-c596-4345-9cd1-caada40895be-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z\" (UID: \"bcfc0074-c596-4345-9cd1-caada40895be\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.774276 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcfc0074-c596-4345-9cd1-caada40895be-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z\" (UID: \"bcfc0074-c596-4345-9cd1-caada40895be\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.774335 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6q7f\" (UniqueName: \"kubernetes.io/projected/bcfc0074-c596-4345-9cd1-caada40895be-kube-api-access-w6q7f\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z\" (UID: \"bcfc0074-c596-4345-9cd1-caada40895be\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.774387 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcfc0074-c596-4345-9cd1-caada40895be-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z\" (UID: \"bcfc0074-c596-4345-9cd1-caada40895be\") " 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.774878 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcfc0074-c596-4345-9cd1-caada40895be-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z\" (UID: \"bcfc0074-c596-4345-9cd1-caada40895be\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.775180 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcfc0074-c596-4345-9cd1-caada40895be-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z\" (UID: \"bcfc0074-c596-4345-9cd1-caada40895be\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.800644 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6q7f\" (UniqueName: \"kubernetes.io/projected/bcfc0074-c596-4345-9cd1-caada40895be-kube-api-access-w6q7f\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z\" (UID: \"bcfc0074-c596-4345-9cd1-caada40895be\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:29 crc kubenswrapper[4983]: I1125 20:38:29.943264 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:30 crc kubenswrapper[4983]: I1125 20:38:30.229277 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z"] Nov 25 20:38:31 crc kubenswrapper[4983]: I1125 20:38:31.091506 4983 generic.go:334] "Generic (PLEG): container finished" podID="bcfc0074-c596-4345-9cd1-caada40895be" containerID="5c2f084bbc136d8ef089f27b6835090f1b2ea7a76679e29385ab8eebf3534952" exitCode=0 Nov 25 20:38:31 crc kubenswrapper[4983]: I1125 20:38:31.091792 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" event={"ID":"bcfc0074-c596-4345-9cd1-caada40895be","Type":"ContainerDied","Data":"5c2f084bbc136d8ef089f27b6835090f1b2ea7a76679e29385ab8eebf3534952"} Nov 25 20:38:31 crc kubenswrapper[4983]: I1125 20:38:31.092004 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" event={"ID":"bcfc0074-c596-4345-9cd1-caada40895be","Type":"ContainerStarted","Data":"e5a0780c0efde063ebbf5a5c1d30b17c8d24356289269c6592fa976987add4b4"} Nov 25 20:38:32 crc kubenswrapper[4983]: I1125 20:38:32.058785 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-g8bfq" podUID="06dff288-ef5e-4a4a-88e5-ce25c216ee5a" containerName="console" containerID="cri-o://ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6" gracePeriod=15 Nov 25 20:38:32 crc kubenswrapper[4983]: I1125 20:38:32.968769 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g8bfq_06dff288-ef5e-4a4a-88e5-ce25c216ee5a/console/0.log" Nov 25 20:38:32 crc kubenswrapper[4983]: I1125 20:38:32.969350 4983 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.106028 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g8bfq_06dff288-ef5e-4a4a-88e5-ce25c216ee5a/console/0.log" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.106371 4983 generic.go:334] "Generic (PLEG): container finished" podID="06dff288-ef5e-4a4a-88e5-ce25c216ee5a" containerID="ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6" exitCode=2 Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.106438 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g8bfq" event={"ID":"06dff288-ef5e-4a4a-88e5-ce25c216ee5a","Type":"ContainerDied","Data":"ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6"} Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.106470 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g8bfq" event={"ID":"06dff288-ef5e-4a4a-88e5-ce25c216ee5a","Type":"ContainerDied","Data":"a4804018aca04da69ef9d4861dee01f19c6efc5a124f905c0c68ce7f931b47ef"} Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.106493 4983 scope.go:117] "RemoveContainer" containerID="ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.106482 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g8bfq" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.109518 4983 generic.go:334] "Generic (PLEG): container finished" podID="bcfc0074-c596-4345-9cd1-caada40895be" containerID="aa21c00d581cdfc3116bc7a382c279dc2cac9490cd4650d4b85644e88370165a" exitCode=0 Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.109599 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" event={"ID":"bcfc0074-c596-4345-9cd1-caada40895be","Type":"ContainerDied","Data":"aa21c00d581cdfc3116bc7a382c279dc2cac9490cd4650d4b85644e88370165a"} Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.126652 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-oauth-serving-cert\") pod \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.126718 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-oauth-config\") pod \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.126753 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-service-ca\") pod \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.126803 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lpxk\" (UniqueName: 
\"kubernetes.io/projected/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-kube-api-access-2lpxk\") pod \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.126864 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-trusted-ca-bundle\") pod \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.127216 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-serving-cert\") pod \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.127272 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-config\") pod \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\" (UID: \"06dff288-ef5e-4a4a-88e5-ce25c216ee5a\") " Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.127213 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "06dff288-ef5e-4a4a-88e5-ce25c216ee5a" (UID: "06dff288-ef5e-4a4a-88e5-ce25c216ee5a"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.127479 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "06dff288-ef5e-4a4a-88e5-ce25c216ee5a" (UID: "06dff288-ef5e-4a4a-88e5-ce25c216ee5a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.127689 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-service-ca" (OuterVolumeSpecName: "service-ca") pod "06dff288-ef5e-4a4a-88e5-ce25c216ee5a" (UID: "06dff288-ef5e-4a4a-88e5-ce25c216ee5a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.128278 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-config" (OuterVolumeSpecName: "console-config") pod "06dff288-ef5e-4a4a-88e5-ce25c216ee5a" (UID: "06dff288-ef5e-4a4a-88e5-ce25c216ee5a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.133793 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "06dff288-ef5e-4a4a-88e5-ce25c216ee5a" (UID: "06dff288-ef5e-4a4a-88e5-ce25c216ee5a"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.133875 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "06dff288-ef5e-4a4a-88e5-ce25c216ee5a" (UID: "06dff288-ef5e-4a4a-88e5-ce25c216ee5a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.134434 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-kube-api-access-2lpxk" (OuterVolumeSpecName: "kube-api-access-2lpxk") pod "06dff288-ef5e-4a4a-88e5-ce25c216ee5a" (UID: "06dff288-ef5e-4a4a-88e5-ce25c216ee5a"). InnerVolumeSpecName "kube-api-access-2lpxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.138646 4983 scope.go:117] "RemoveContainer" containerID="ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6" Nov 25 20:38:33 crc kubenswrapper[4983]: E1125 20:38:33.142978 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6\": container with ID starting with ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6 not found: ID does not exist" containerID="ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.143013 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6"} err="failed to get container status \"ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6\": rpc error: code = NotFound desc = could not find 
container \"ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6\": container with ID starting with ba613656d21c2e6111054c6f9536782b3b71547ef4795377442fa03fa7a78bc6 not found: ID does not exist" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.228937 4983 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.228973 4983 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.228983 4983 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.228992 4983 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.229000 4983 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.229009 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lpxk\" (UniqueName: \"kubernetes.io/projected/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-kube-api-access-2lpxk\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.229018 4983 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/06dff288-ef5e-4a4a-88e5-ce25c216ee5a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.436937 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g8bfq"] Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.436995 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-g8bfq"] Nov 25 20:38:33 crc kubenswrapper[4983]: I1125 20:38:33.615404 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06dff288-ef5e-4a4a-88e5-ce25c216ee5a" path="/var/lib/kubelet/pods/06dff288-ef5e-4a4a-88e5-ce25c216ee5a/volumes" Nov 25 20:38:34 crc kubenswrapper[4983]: I1125 20:38:34.118991 4983 generic.go:334] "Generic (PLEG): container finished" podID="bcfc0074-c596-4345-9cd1-caada40895be" containerID="3ac03fa62e33e4beddf4d37580beb41a384a3e520bfdfef04e22b8cda4607d2b" exitCode=0 Nov 25 20:38:34 crc kubenswrapper[4983]: I1125 20:38:34.119065 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" event={"ID":"bcfc0074-c596-4345-9cd1-caada40895be","Type":"ContainerDied","Data":"3ac03fa62e33e4beddf4d37580beb41a384a3e520bfdfef04e22b8cda4607d2b"} Nov 25 20:38:35 crc kubenswrapper[4983]: I1125 20:38:35.345666 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:35 crc kubenswrapper[4983]: I1125 20:38:35.360838 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcfc0074-c596-4345-9cd1-caada40895be-util\") pod \"bcfc0074-c596-4345-9cd1-caada40895be\" (UID: \"bcfc0074-c596-4345-9cd1-caada40895be\") " Nov 25 20:38:35 crc kubenswrapper[4983]: I1125 20:38:35.378078 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcfc0074-c596-4345-9cd1-caada40895be-util" (OuterVolumeSpecName: "util") pod "bcfc0074-c596-4345-9cd1-caada40895be" (UID: "bcfc0074-c596-4345-9cd1-caada40895be"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:38:35 crc kubenswrapper[4983]: I1125 20:38:35.462544 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6q7f\" (UniqueName: \"kubernetes.io/projected/bcfc0074-c596-4345-9cd1-caada40895be-kube-api-access-w6q7f\") pod \"bcfc0074-c596-4345-9cd1-caada40895be\" (UID: \"bcfc0074-c596-4345-9cd1-caada40895be\") " Nov 25 20:38:35 crc kubenswrapper[4983]: I1125 20:38:35.462637 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcfc0074-c596-4345-9cd1-caada40895be-bundle\") pod \"bcfc0074-c596-4345-9cd1-caada40895be\" (UID: \"bcfc0074-c596-4345-9cd1-caada40895be\") " Nov 25 20:38:35 crc kubenswrapper[4983]: I1125 20:38:35.463016 4983 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcfc0074-c596-4345-9cd1-caada40895be-util\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:35 crc kubenswrapper[4983]: I1125 20:38:35.463883 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bcfc0074-c596-4345-9cd1-caada40895be-bundle" (OuterVolumeSpecName: "bundle") pod "bcfc0074-c596-4345-9cd1-caada40895be" (UID: "bcfc0074-c596-4345-9cd1-caada40895be"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:38:35 crc kubenswrapper[4983]: I1125 20:38:35.468503 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcfc0074-c596-4345-9cd1-caada40895be-kube-api-access-w6q7f" (OuterVolumeSpecName: "kube-api-access-w6q7f") pod "bcfc0074-c596-4345-9cd1-caada40895be" (UID: "bcfc0074-c596-4345-9cd1-caada40895be"). InnerVolumeSpecName "kube-api-access-w6q7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:38:35 crc kubenswrapper[4983]: I1125 20:38:35.564288 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6q7f\" (UniqueName: \"kubernetes.io/projected/bcfc0074-c596-4345-9cd1-caada40895be-kube-api-access-w6q7f\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:35 crc kubenswrapper[4983]: I1125 20:38:35.564366 4983 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcfc0074-c596-4345-9cd1-caada40895be-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:38:36 crc kubenswrapper[4983]: I1125 20:38:36.134171 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" event={"ID":"bcfc0074-c596-4345-9cd1-caada40895be","Type":"ContainerDied","Data":"e5a0780c0efde063ebbf5a5c1d30b17c8d24356289269c6592fa976987add4b4"} Nov 25 20:38:36 crc kubenswrapper[4983]: I1125 20:38:36.134764 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z" Nov 25 20:38:36 crc kubenswrapper[4983]: I1125 20:38:36.134793 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5a0780c0efde063ebbf5a5c1d30b17c8d24356289269c6592fa976987add4b4" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.888737 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj"] Nov 25 20:38:44 crc kubenswrapper[4983]: E1125 20:38:44.889609 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06dff288-ef5e-4a4a-88e5-ce25c216ee5a" containerName="console" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.889621 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="06dff288-ef5e-4a4a-88e5-ce25c216ee5a" containerName="console" Nov 25 20:38:44 crc kubenswrapper[4983]: E1125 20:38:44.889633 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcfc0074-c596-4345-9cd1-caada40895be" containerName="util" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.889639 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcfc0074-c596-4345-9cd1-caada40895be" containerName="util" Nov 25 20:38:44 crc kubenswrapper[4983]: E1125 20:38:44.889650 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcfc0074-c596-4345-9cd1-caada40895be" containerName="pull" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.889657 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcfc0074-c596-4345-9cd1-caada40895be" containerName="pull" Nov 25 20:38:44 crc kubenswrapper[4983]: E1125 20:38:44.889666 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcfc0074-c596-4345-9cd1-caada40895be" containerName="extract" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.889687 4983 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bcfc0074-c596-4345-9cd1-caada40895be" containerName="extract" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.889785 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="06dff288-ef5e-4a4a-88e5-ce25c216ee5a" containerName="console" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.889794 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcfc0074-c596-4345-9cd1-caada40895be" containerName="extract" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.890138 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.892679 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.893080 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.893502 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.893667 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.895346 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-65kd8" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.906851 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj"] Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.987137 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/74baeb7c-21f0-4d1c-9a61-7694f59cc161-apiservice-cert\") pod \"metallb-operator-controller-manager-6dcc87d69d-p8fwj\" (UID: \"74baeb7c-21f0-4d1c-9a61-7694f59cc161\") " pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.987275 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-447cx\" (UniqueName: \"kubernetes.io/projected/74baeb7c-21f0-4d1c-9a61-7694f59cc161-kube-api-access-447cx\") pod \"metallb-operator-controller-manager-6dcc87d69d-p8fwj\" (UID: \"74baeb7c-21f0-4d1c-9a61-7694f59cc161\") " pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:38:44 crc kubenswrapper[4983]: I1125 20:38:44.987334 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/74baeb7c-21f0-4d1c-9a61-7694f59cc161-webhook-cert\") pod \"metallb-operator-controller-manager-6dcc87d69d-p8fwj\" (UID: \"74baeb7c-21f0-4d1c-9a61-7694f59cc161\") " pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.088113 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-447cx\" (UniqueName: \"kubernetes.io/projected/74baeb7c-21f0-4d1c-9a61-7694f59cc161-kube-api-access-447cx\") pod \"metallb-operator-controller-manager-6dcc87d69d-p8fwj\" (UID: \"74baeb7c-21f0-4d1c-9a61-7694f59cc161\") " pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.088195 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/74baeb7c-21f0-4d1c-9a61-7694f59cc161-webhook-cert\") pod \"metallb-operator-controller-manager-6dcc87d69d-p8fwj\" (UID: 
\"74baeb7c-21f0-4d1c-9a61-7694f59cc161\") " pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.088238 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/74baeb7c-21f0-4d1c-9a61-7694f59cc161-apiservice-cert\") pod \"metallb-operator-controller-manager-6dcc87d69d-p8fwj\" (UID: \"74baeb7c-21f0-4d1c-9a61-7694f59cc161\") " pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.095015 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/74baeb7c-21f0-4d1c-9a61-7694f59cc161-webhook-cert\") pod \"metallb-operator-controller-manager-6dcc87d69d-p8fwj\" (UID: \"74baeb7c-21f0-4d1c-9a61-7694f59cc161\") " pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.095198 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/74baeb7c-21f0-4d1c-9a61-7694f59cc161-apiservice-cert\") pod \"metallb-operator-controller-manager-6dcc87d69d-p8fwj\" (UID: \"74baeb7c-21f0-4d1c-9a61-7694f59cc161\") " pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.112805 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-447cx\" (UniqueName: \"kubernetes.io/projected/74baeb7c-21f0-4d1c-9a61-7694f59cc161-kube-api-access-447cx\") pod \"metallb-operator-controller-manager-6dcc87d69d-p8fwj\" (UID: \"74baeb7c-21f0-4d1c-9a61-7694f59cc161\") " pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.204845 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.276380 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s"] Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.277089 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.279250 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.279606 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.280175 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dcmsm" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.290626 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/668e90f8-b352-4ad3-8965-1394ac68bf45-webhook-cert\") pod \"metallb-operator-webhook-server-7fdb8c798-tkp7s\" (UID: \"668e90f8-b352-4ad3-8965-1394ac68bf45\") " pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.290692 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/668e90f8-b352-4ad3-8965-1394ac68bf45-apiservice-cert\") pod \"metallb-operator-webhook-server-7fdb8c798-tkp7s\" (UID: \"668e90f8-b352-4ad3-8965-1394ac68bf45\") " pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.290747 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns88k\" (UniqueName: \"kubernetes.io/projected/668e90f8-b352-4ad3-8965-1394ac68bf45-kube-api-access-ns88k\") pod \"metallb-operator-webhook-server-7fdb8c798-tkp7s\" (UID: \"668e90f8-b352-4ad3-8965-1394ac68bf45\") " pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.339043 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s"] Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.391531 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns88k\" (UniqueName: \"kubernetes.io/projected/668e90f8-b352-4ad3-8965-1394ac68bf45-kube-api-access-ns88k\") pod \"metallb-operator-webhook-server-7fdb8c798-tkp7s\" (UID: \"668e90f8-b352-4ad3-8965-1394ac68bf45\") " pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.391629 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/668e90f8-b352-4ad3-8965-1394ac68bf45-webhook-cert\") pod \"metallb-operator-webhook-server-7fdb8c798-tkp7s\" (UID: \"668e90f8-b352-4ad3-8965-1394ac68bf45\") " pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.391664 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/668e90f8-b352-4ad3-8965-1394ac68bf45-apiservice-cert\") pod \"metallb-operator-webhook-server-7fdb8c798-tkp7s\" (UID: \"668e90f8-b352-4ad3-8965-1394ac68bf45\") " pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.396539 4983 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/668e90f8-b352-4ad3-8965-1394ac68bf45-webhook-cert\") pod \"metallb-operator-webhook-server-7fdb8c798-tkp7s\" (UID: \"668e90f8-b352-4ad3-8965-1394ac68bf45\") " pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.397020 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/668e90f8-b352-4ad3-8965-1394ac68bf45-apiservice-cert\") pod \"metallb-operator-webhook-server-7fdb8c798-tkp7s\" (UID: \"668e90f8-b352-4ad3-8965-1394ac68bf45\") " pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.421067 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns88k\" (UniqueName: \"kubernetes.io/projected/668e90f8-b352-4ad3-8965-1394ac68bf45-kube-api-access-ns88k\") pod \"metallb-operator-webhook-server-7fdb8c798-tkp7s\" (UID: \"668e90f8-b352-4ad3-8965-1394ac68bf45\") " pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.595244 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.749798 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj"] Nov 25 20:38:45 crc kubenswrapper[4983]: I1125 20:38:45.863856 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s"] Nov 25 20:38:45 crc kubenswrapper[4983]: W1125 20:38:45.872055 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod668e90f8_b352_4ad3_8965_1394ac68bf45.slice/crio-6658573ee2aa828ce449fe47cd1cba8a2962282f3aa20aa5138c05cf26d8a49c WatchSource:0}: Error finding container 6658573ee2aa828ce449fe47cd1cba8a2962282f3aa20aa5138c05cf26d8a49c: Status 404 returned error can't find the container with id 6658573ee2aa828ce449fe47cd1cba8a2962282f3aa20aa5138c05cf26d8a49c Nov 25 20:38:46 crc kubenswrapper[4983]: I1125 20:38:46.196258 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" event={"ID":"74baeb7c-21f0-4d1c-9a61-7694f59cc161","Type":"ContainerStarted","Data":"e39d26ca9c904cd3e69a44e04bd814c0770213b0aba4afa1cf2cb09fa40bbe78"} Nov 25 20:38:46 crc kubenswrapper[4983]: I1125 20:38:46.198627 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" event={"ID":"668e90f8-b352-4ad3-8965-1394ac68bf45","Type":"ContainerStarted","Data":"6658573ee2aa828ce449fe47cd1cba8a2962282f3aa20aa5138c05cf26d8a49c"} Nov 25 20:38:52 crc kubenswrapper[4983]: I1125 20:38:52.249919 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" 
event={"ID":"74baeb7c-21f0-4d1c-9a61-7694f59cc161","Type":"ContainerStarted","Data":"f712243812834ca6069eb692c71375f383526ac41298ce33047d603b57260e3a"} Nov 25 20:38:52 crc kubenswrapper[4983]: I1125 20:38:52.251354 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:38:52 crc kubenswrapper[4983]: I1125 20:38:52.268073 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" event={"ID":"668e90f8-b352-4ad3-8965-1394ac68bf45","Type":"ContainerStarted","Data":"f0c705935c78f2dd81985b5dc570aa043f8ac29120fb8c01e9cfaf86981f6bed"} Nov 25 20:38:52 crc kubenswrapper[4983]: I1125 20:38:52.268771 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:38:52 crc kubenswrapper[4983]: I1125 20:38:52.295343 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" podStartSLOduration=2.961579845 podStartE2EDuration="8.29529449s" podCreationTimestamp="2025-11-25 20:38:44 +0000 UTC" firstStartedPulling="2025-11-25 20:38:45.792000947 +0000 UTC m=+706.904534339" lastFinishedPulling="2025-11-25 20:38:51.125715592 +0000 UTC m=+712.238248984" observedRunningTime="2025-11-25 20:38:52.292787733 +0000 UTC m=+713.405321125" watchObservedRunningTime="2025-11-25 20:38:52.29529449 +0000 UTC m=+713.407827882" Nov 25 20:38:52 crc kubenswrapper[4983]: I1125 20:38:52.332494 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" podStartSLOduration=2.083376316 podStartE2EDuration="7.332479746s" podCreationTimestamp="2025-11-25 20:38:45 +0000 UTC" firstStartedPulling="2025-11-25 20:38:45.878386009 +0000 UTC m=+706.990919391" lastFinishedPulling="2025-11-25 
20:38:51.127489429 +0000 UTC m=+712.240022821" observedRunningTime="2025-11-25 20:38:52.331273004 +0000 UTC m=+713.443806396" watchObservedRunningTime="2025-11-25 20:38:52.332479746 +0000 UTC m=+713.445013138" Nov 25 20:39:05 crc kubenswrapper[4983]: I1125 20:39:05.599399 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7fdb8c798-tkp7s" Nov 25 20:39:25 crc kubenswrapper[4983]: I1125 20:39:25.208501 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.150320 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-nkz74"] Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.152374 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.155358 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-297rv" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.156035 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.156090 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.169816 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r"] Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.170497 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.172606 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.198606 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r"] Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.276761 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-q2pnt"] Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.278132 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-q2pnt" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.280485 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.280710 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-nw95h" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.281156 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.281300 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-frr-conf\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.281346 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jgmc\" (UniqueName: \"kubernetes.io/projected/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-kube-api-access-8jgmc\") pod \"frr-k8s-nkz74\" (UID: 
\"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.281382 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4cb9ac50-997a-4361-be38-99a645916356-cert\") pod \"frr-k8s-webhook-server-6998585d5-6sz7r\" (UID: \"4cb9ac50-997a-4361-be38-99a645916356\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.281410 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-frr-startup\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.281431 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-frr-sockets\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.281460 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-reloader\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.281482 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fs4z\" (UniqueName: \"kubernetes.io/projected/4cb9ac50-997a-4361-be38-99a645916356-kube-api-access-4fs4z\") pod \"frr-k8s-webhook-server-6998585d5-6sz7r\" (UID: \"4cb9ac50-997a-4361-be38-99a645916356\") 
" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.281502 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-metrics\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.281527 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-metrics-certs\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.284365 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.296347 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-rfq8m"] Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.297474 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.301183 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.311474 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-rfq8m"] Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.382394 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fs4z\" (UniqueName: \"kubernetes.io/projected/4cb9ac50-997a-4361-be38-99a645916356-kube-api-access-4fs4z\") pod \"frr-k8s-webhook-server-6998585d5-6sz7r\" (UID: \"4cb9ac50-997a-4361-be38-99a645916356\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.382453 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d63bc930-d0b8-4b74-924f-def9a4c05193-metrics-certs\") pod \"controller-6c7b4b5f48-rfq8m\" (UID: \"d63bc930-d0b8-4b74-924f-def9a4c05193\") " pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.382479 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-metrics\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.382507 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b17632af-d63d-48e6-bbc3-e4056a403b94-metallb-excludel2\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 
20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.382532 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-metrics-certs\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.382576 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-metrics-certs\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.382611 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-memberlist\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:26 crc kubenswrapper[4983]: E1125 20:39:26.382864 4983 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 25 20:39:26 crc kubenswrapper[4983]: E1125 20:39:26.382984 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-metrics-certs podName:94f67f46-ba33-4e52-a4f7-dabfa0e919c8 nodeName:}" failed. No retries permitted until 2025-11-25 20:39:26.882935964 +0000 UTC m=+747.995469346 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-metrics-certs") pod "frr-k8s-nkz74" (UID: "94f67f46-ba33-4e52-a4f7-dabfa0e919c8") : secret "frr-k8s-certs-secret" not found Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.383024 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-frr-conf\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.383067 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-metrics\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.383069 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jgmc\" (UniqueName: \"kubernetes.io/projected/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-kube-api-access-8jgmc\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.383153 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d63bc930-d0b8-4b74-924f-def9a4c05193-cert\") pod \"controller-6c7b4b5f48-rfq8m\" (UID: \"d63bc930-d0b8-4b74-924f-def9a4c05193\") " pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.383206 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4cb9ac50-997a-4361-be38-99a645916356-cert\") pod \"frr-k8s-webhook-server-6998585d5-6sz7r\" 
(UID: \"4cb9ac50-997a-4361-be38-99a645916356\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.383263 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-frr-startup\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.383271 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-frr-conf\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.383332 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-frr-sockets\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: E1125 20:39:26.383369 4983 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 25 20:39:26 crc kubenswrapper[4983]: E1125 20:39:26.383443 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cb9ac50-997a-4361-be38-99a645916356-cert podName:4cb9ac50-997a-4361-be38-99a645916356 nodeName:}" failed. No retries permitted until 2025-11-25 20:39:26.883417377 +0000 UTC m=+747.995950769 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4cb9ac50-997a-4361-be38-99a645916356-cert") pod "frr-k8s-webhook-server-6998585d5-6sz7r" (UID: "4cb9ac50-997a-4361-be38-99a645916356") : secret "frr-k8s-webhook-server-cert" not found Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.383385 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-reloader\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.383502 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp2bf\" (UniqueName: \"kubernetes.io/projected/b17632af-d63d-48e6-bbc3-e4056a403b94-kube-api-access-fp2bf\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.383567 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2p4b\" (UniqueName: \"kubernetes.io/projected/d63bc930-d0b8-4b74-924f-def9a4c05193-kube-api-access-t2p4b\") pod \"controller-6c7b4b5f48-rfq8m\" (UID: \"d63bc930-d0b8-4b74-924f-def9a4c05193\") " pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.384031 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-reloader\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.384126 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-frr-startup\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.384204 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-frr-sockets\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.403928 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jgmc\" (UniqueName: \"kubernetes.io/projected/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-kube-api-access-8jgmc\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.403988 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fs4z\" (UniqueName: \"kubernetes.io/projected/4cb9ac50-997a-4361-be38-99a645916356-kube-api-access-4fs4z\") pod \"frr-k8s-webhook-server-6998585d5-6sz7r\" (UID: \"4cb9ac50-997a-4361-be38-99a645916356\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.484997 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp2bf\" (UniqueName: \"kubernetes.io/projected/b17632af-d63d-48e6-bbc3-e4056a403b94-kube-api-access-fp2bf\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.485429 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2p4b\" (UniqueName: \"kubernetes.io/projected/d63bc930-d0b8-4b74-924f-def9a4c05193-kube-api-access-t2p4b\") pod 
\"controller-6c7b4b5f48-rfq8m\" (UID: \"d63bc930-d0b8-4b74-924f-def9a4c05193\") " pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.485453 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d63bc930-d0b8-4b74-924f-def9a4c05193-metrics-certs\") pod \"controller-6c7b4b5f48-rfq8m\" (UID: \"d63bc930-d0b8-4b74-924f-def9a4c05193\") " pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.485490 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b17632af-d63d-48e6-bbc3-e4056a403b94-metallb-excludel2\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.485532 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-metrics-certs\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.485623 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-memberlist\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:26 crc kubenswrapper[4983]: E1125 20:39:26.485788 4983 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 25 20:39:26 crc kubenswrapper[4983]: E1125 20:39:26.485791 4983 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 20:39:26 crc kubenswrapper[4983]: 
E1125 20:39:26.485867 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-metrics-certs podName:b17632af-d63d-48e6-bbc3-e4056a403b94 nodeName:}" failed. No retries permitted until 2025-11-25 20:39:26.985842045 +0000 UTC m=+748.098375437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-metrics-certs") pod "speaker-q2pnt" (UID: "b17632af-d63d-48e6-bbc3-e4056a403b94") : secret "speaker-certs-secret" not found Nov 25 20:39:26 crc kubenswrapper[4983]: E1125 20:39:26.485891 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-memberlist podName:b17632af-d63d-48e6-bbc3-e4056a403b94 nodeName:}" failed. No retries permitted until 2025-11-25 20:39:26.985883066 +0000 UTC m=+748.098416458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-memberlist") pod "speaker-q2pnt" (UID: "b17632af-d63d-48e6-bbc3-e4056a403b94") : secret "metallb-memberlist" not found Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.485957 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d63bc930-d0b8-4b74-924f-def9a4c05193-cert\") pod \"controller-6c7b4b5f48-rfq8m\" (UID: \"d63bc930-d0b8-4b74-924f-def9a4c05193\") " pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.486974 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b17632af-d63d-48e6-bbc3-e4056a403b94-metallb-excludel2\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 
20:39:26.488120 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.490147 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d63bc930-d0b8-4b74-924f-def9a4c05193-metrics-certs\") pod \"controller-6c7b4b5f48-rfq8m\" (UID: \"d63bc930-d0b8-4b74-924f-def9a4c05193\") " pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.503084 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d63bc930-d0b8-4b74-924f-def9a4c05193-cert\") pod \"controller-6c7b4b5f48-rfq8m\" (UID: \"d63bc930-d0b8-4b74-924f-def9a4c05193\") " pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.503682 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2p4b\" (UniqueName: \"kubernetes.io/projected/d63bc930-d0b8-4b74-924f-def9a4c05193-kube-api-access-t2p4b\") pod \"controller-6c7b4b5f48-rfq8m\" (UID: \"d63bc930-d0b8-4b74-924f-def9a4c05193\") " pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.507335 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp2bf\" (UniqueName: \"kubernetes.io/projected/b17632af-d63d-48e6-bbc3-e4056a403b94-kube-api-access-fp2bf\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.609752 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.870828 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-rfq8m"] Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.895776 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4cb9ac50-997a-4361-be38-99a645916356-cert\") pod \"frr-k8s-webhook-server-6998585d5-6sz7r\" (UID: \"4cb9ac50-997a-4361-be38-99a645916356\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.895904 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-metrics-certs\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.903508 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4cb9ac50-997a-4361-be38-99a645916356-cert\") pod \"frr-k8s-webhook-server-6998585d5-6sz7r\" (UID: \"4cb9ac50-997a-4361-be38-99a645916356\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.904701 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/94f67f46-ba33-4e52-a4f7-dabfa0e919c8-metrics-certs\") pod \"frr-k8s-nkz74\" (UID: \"94f67f46-ba33-4e52-a4f7-dabfa0e919c8\") " pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.997899 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-memberlist\") pod \"speaker-q2pnt\" (UID: 
\"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:26 crc kubenswrapper[4983]: I1125 20:39:26.998114 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-metrics-certs\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:26 crc kubenswrapper[4983]: E1125 20:39:26.998278 4983 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 20:39:26 crc kubenswrapper[4983]: E1125 20:39:26.998361 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-memberlist podName:b17632af-d63d-48e6-bbc3-e4056a403b94 nodeName:}" failed. No retries permitted until 2025-11-25 20:39:27.998335138 +0000 UTC m=+749.110868530 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-memberlist") pod "speaker-q2pnt" (UID: "b17632af-d63d-48e6-bbc3-e4056a403b94") : secret "metallb-memberlist" not found Nov 25 20:39:27 crc kubenswrapper[4983]: I1125 20:39:27.004420 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-metrics-certs\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:27 crc kubenswrapper[4983]: I1125 20:39:27.071252 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:27 crc kubenswrapper[4983]: I1125 20:39:27.083959 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" Nov 25 20:39:27 crc kubenswrapper[4983]: I1125 20:39:27.354512 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r"] Nov 25 20:39:27 crc kubenswrapper[4983]: I1125 20:39:27.485965 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" event={"ID":"4cb9ac50-997a-4361-be38-99a645916356","Type":"ContainerStarted","Data":"4c2b9ce26e05cd4372e9f2252031d988493af82f11ee57307c7b748d4b13b2db"} Nov 25 20:39:27 crc kubenswrapper[4983]: I1125 20:39:27.488366 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-rfq8m" event={"ID":"d63bc930-d0b8-4b74-924f-def9a4c05193","Type":"ContainerStarted","Data":"108cfb42de2c2d73efac39cd962498f827812a058561f1f85901578f46ccd643"} Nov 25 20:39:27 crc kubenswrapper[4983]: I1125 20:39:27.488397 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-rfq8m" event={"ID":"d63bc930-d0b8-4b74-924f-def9a4c05193","Type":"ContainerStarted","Data":"e013a1a03ad499102581e4d869d734d27bf6c8bafac982605191cb6bed081d29"} Nov 25 20:39:27 crc kubenswrapper[4983]: I1125 20:39:27.488408 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-rfq8m" event={"ID":"d63bc930-d0b8-4b74-924f-def9a4c05193","Type":"ContainerStarted","Data":"8b20dd9bbb48d0c01fba6c1ef001a6af3a183856adfce63ac5a244e7c6230e48"} Nov 25 20:39:27 crc kubenswrapper[4983]: I1125 20:39:27.488610 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:27 crc kubenswrapper[4983]: I1125 20:39:27.490402 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkz74" 
event={"ID":"94f67f46-ba33-4e52-a4f7-dabfa0e919c8","Type":"ContainerStarted","Data":"6965682b3addd4b2ece2d931fc66aa72bd2839c163d1d47226aa10c5b1c880a3"} Nov 25 20:39:28 crc kubenswrapper[4983]: I1125 20:39:28.030688 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-memberlist\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:28 crc kubenswrapper[4983]: I1125 20:39:28.038440 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b17632af-d63d-48e6-bbc3-e4056a403b94-memberlist\") pod \"speaker-q2pnt\" (UID: \"b17632af-d63d-48e6-bbc3-e4056a403b94\") " pod="metallb-system/speaker-q2pnt" Nov 25 20:39:28 crc kubenswrapper[4983]: I1125 20:39:28.093703 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-q2pnt" Nov 25 20:39:28 crc kubenswrapper[4983]: I1125 20:39:28.502485 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q2pnt" event={"ID":"b17632af-d63d-48e6-bbc3-e4056a403b94","Type":"ContainerStarted","Data":"8e04213bb8a736db8356a9a3b5b1de63e8b70d5931f6545d87c90f5257fd5369"} Nov 25 20:39:28 crc kubenswrapper[4983]: I1125 20:39:28.502525 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q2pnt" event={"ID":"b17632af-d63d-48e6-bbc3-e4056a403b94","Type":"ContainerStarted","Data":"18301d7b426a16d11e567404155649c5315f8cf0eba4f1fdbe0897f32685aa99"} Nov 25 20:39:29 crc kubenswrapper[4983]: I1125 20:39:29.515385 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q2pnt" event={"ID":"b17632af-d63d-48e6-bbc3-e4056a403b94","Type":"ContainerStarted","Data":"adc7216ccfe2e57dee2bc6d786ef43634aea5fa642a95682b957a7fbec89a583"} Nov 25 20:39:29 crc kubenswrapper[4983]: I1125 
20:39:29.515958 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-q2pnt" Nov 25 20:39:29 crc kubenswrapper[4983]: I1125 20:39:29.558976 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-rfq8m" podStartSLOduration=3.558941321 podStartE2EDuration="3.558941321s" podCreationTimestamp="2025-11-25 20:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:39:27.52173195 +0000 UTC m=+748.634265372" watchObservedRunningTime="2025-11-25 20:39:29.558941321 +0000 UTC m=+750.671474713" Nov 25 20:39:29 crc kubenswrapper[4983]: I1125 20:39:29.559853 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-q2pnt" podStartSLOduration=3.559847235 podStartE2EDuration="3.559847235s" podCreationTimestamp="2025-11-25 20:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:39:29.557744819 +0000 UTC m=+750.670278211" watchObservedRunningTime="2025-11-25 20:39:29.559847235 +0000 UTC m=+750.672380627" Nov 25 20:39:34 crc kubenswrapper[4983]: I1125 20:39:34.550787 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" event={"ID":"4cb9ac50-997a-4361-be38-99a645916356","Type":"ContainerStarted","Data":"c6d72bf49af6abbe2049032dff9c478a9eea14b6f07e3fe369c7da705e55317f"} Nov 25 20:39:34 crc kubenswrapper[4983]: I1125 20:39:34.554453 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" Nov 25 20:39:34 crc kubenswrapper[4983]: I1125 20:39:34.558386 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkz74" 
event={"ID":"94f67f46-ba33-4e52-a4f7-dabfa0e919c8","Type":"ContainerStarted","Data":"58cd14c68bed9299cef9eeb646f3db5a3fba1a3c00ec4a866bfb6eeefb78d656"} Nov 25 20:39:34 crc kubenswrapper[4983]: I1125 20:39:34.582721 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" podStartSLOduration=1.597869469 podStartE2EDuration="8.582700611s" podCreationTimestamp="2025-11-25 20:39:26 +0000 UTC" firstStartedPulling="2025-11-25 20:39:27.360584578 +0000 UTC m=+748.473117970" lastFinishedPulling="2025-11-25 20:39:34.34541573 +0000 UTC m=+755.457949112" observedRunningTime="2025-11-25 20:39:34.575696164 +0000 UTC m=+755.688229556" watchObservedRunningTime="2025-11-25 20:39:34.582700611 +0000 UTC m=+755.695234003" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.086205 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6x4tb"] Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.087132 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" podUID="98915ddf-6a6b-4c4e-a8b5-379567bbbf09" containerName="controller-manager" containerID="cri-o://51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96" gracePeriod=30 Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.224582 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq"] Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.224865 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" podUID="5ab6fa75-c38e-4ad3-ad08-f391846e6fac" containerName="route-controller-manager" containerID="cri-o://790b978fdb245e34a176a41b20b8e5d7d57f7894015618621e8c8fec19234439" gracePeriod=30 Nov 25 20:39:35 crc 
kubenswrapper[4983]: I1125 20:39:35.546390 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.597991 4983 generic.go:334] "Generic (PLEG): container finished" podID="94f67f46-ba33-4e52-a4f7-dabfa0e919c8" containerID="58cd14c68bed9299cef9eeb646f3db5a3fba1a3c00ec4a866bfb6eeefb78d656" exitCode=0 Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.598080 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkz74" event={"ID":"94f67f46-ba33-4e52-a4f7-dabfa0e919c8","Type":"ContainerDied","Data":"58cd14c68bed9299cef9eeb646f3db5a3fba1a3c00ec4a866bfb6eeefb78d656"} Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.603219 4983 generic.go:334] "Generic (PLEG): container finished" podID="5ab6fa75-c38e-4ad3-ad08-f391846e6fac" containerID="790b978fdb245e34a176a41b20b8e5d7d57f7894015618621e8c8fec19234439" exitCode=0 Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.603303 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" event={"ID":"5ab6fa75-c38e-4ad3-ad08-f391846e6fac","Type":"ContainerDied","Data":"790b978fdb245e34a176a41b20b8e5d7d57f7894015618621e8c8fec19234439"} Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.611491 4983 generic.go:334] "Generic (PLEG): container finished" podID="98915ddf-6a6b-4c4e-a8b5-379567bbbf09" containerID="51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96" exitCode=0 Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.611887 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.631741 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" event={"ID":"98915ddf-6a6b-4c4e-a8b5-379567bbbf09","Type":"ContainerDied","Data":"51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96"} Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.631830 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6x4tb" event={"ID":"98915ddf-6a6b-4c4e-a8b5-379567bbbf09","Type":"ContainerDied","Data":"1cda094cb646798437381da0b9458c1836b05eb0bbebf6eade0b1cb066aef936"} Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.631883 4983 scope.go:117] "RemoveContainer" containerID="51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.679689 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs22p\" (UniqueName: \"kubernetes.io/projected/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-kube-api-access-vs22p\") pod \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.679944 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-serving-cert\") pod \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.680037 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-proxy-ca-bundles\") pod \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\" (UID: 
\"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.680164 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-client-ca\") pod \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.680259 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-config\") pod \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\" (UID: \"98915ddf-6a6b-4c4e-a8b5-379567bbbf09\") " Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.681430 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-config" (OuterVolumeSpecName: "config") pod "98915ddf-6a6b-4c4e-a8b5-379567bbbf09" (UID: "98915ddf-6a6b-4c4e-a8b5-379567bbbf09"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.681949 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "98915ddf-6a6b-4c4e-a8b5-379567bbbf09" (UID: "98915ddf-6a6b-4c4e-a8b5-379567bbbf09"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.682179 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-client-ca" (OuterVolumeSpecName: "client-ca") pod "98915ddf-6a6b-4c4e-a8b5-379567bbbf09" (UID: "98915ddf-6a6b-4c4e-a8b5-379567bbbf09"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.690043 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-kube-api-access-vs22p" (OuterVolumeSpecName: "kube-api-access-vs22p") pod "98915ddf-6a6b-4c4e-a8b5-379567bbbf09" (UID: "98915ddf-6a6b-4c4e-a8b5-379567bbbf09"). InnerVolumeSpecName "kube-api-access-vs22p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.690654 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "98915ddf-6a6b-4c4e-a8b5-379567bbbf09" (UID: "98915ddf-6a6b-4c4e-a8b5-379567bbbf09"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.694609 4983 scope.go:117] "RemoveContainer" containerID="51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96" Nov 25 20:39:35 crc kubenswrapper[4983]: E1125 20:39:35.695252 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96\": container with ID starting with 51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96 not found: ID does not exist" containerID="51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.695302 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96"} err="failed to get container status \"51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96\": rpc error: code = NotFound desc = could not find container 
\"51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96\": container with ID starting with 51d7d3364c7bd0c2400d0fa10df948c54bed62d8d231a1a49e2fe82a3da5ab96 not found: ID does not exist" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.735293 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.783131 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.783187 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs22p\" (UniqueName: \"kubernetes.io/projected/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-kube-api-access-vs22p\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.783201 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.783216 4983 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.783231 4983 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/98915ddf-6a6b-4c4e-a8b5-379567bbbf09-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.884741 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-config\") pod 
\"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.884829 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-serving-cert\") pod \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.884968 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6l9p\" (UniqueName: \"kubernetes.io/projected/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-kube-api-access-f6l9p\") pod \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.884990 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-client-ca\") pod \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\" (UID: \"5ab6fa75-c38e-4ad3-ad08-f391846e6fac\") " Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.886448 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-client-ca" (OuterVolumeSpecName: "client-ca") pod "5ab6fa75-c38e-4ad3-ad08-f391846e6fac" (UID: "5ab6fa75-c38e-4ad3-ad08-f391846e6fac"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.886581 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-config" (OuterVolumeSpecName: "config") pod "5ab6fa75-c38e-4ad3-ad08-f391846e6fac" (UID: "5ab6fa75-c38e-4ad3-ad08-f391846e6fac"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.892370 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5ab6fa75-c38e-4ad3-ad08-f391846e6fac" (UID: "5ab6fa75-c38e-4ad3-ad08-f391846e6fac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.892482 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-kube-api-access-f6l9p" (OuterVolumeSpecName: "kube-api-access-f6l9p") pod "5ab6fa75-c38e-4ad3-ad08-f391846e6fac" (UID: "5ab6fa75-c38e-4ad3-ad08-f391846e6fac"). InnerVolumeSpecName "kube-api-access-f6l9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.947227 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6x4tb"] Nov 25 20:39:35 crc kubenswrapper[4983]: I1125 20:39:35.952528 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6x4tb"] Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.014995 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.015438 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.015449 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6l9p\" 
(UniqueName: \"kubernetes.io/projected/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-kube-api-access-f6l9p\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.015457 4983 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5ab6fa75-c38e-4ad3-ad08-f391846e6fac-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.623284 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" event={"ID":"5ab6fa75-c38e-4ad3-ad08-f391846e6fac","Type":"ContainerDied","Data":"d4296719fec0d6a3f8d13c2e9983d9c114680f05b52d08b240616a80181a4fba"} Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.623306 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.623358 4983 scope.go:117] "RemoveContainer" containerID="790b978fdb245e34a176a41b20b8e5d7d57f7894015618621e8c8fec19234439" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.632404 4983 generic.go:334] "Generic (PLEG): container finished" podID="94f67f46-ba33-4e52-a4f7-dabfa0e919c8" containerID="f4f7a612ea2c92422c886b42a77299f9283b94e607b5c56a348ca3a534f04f2c" exitCode=0 Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.632478 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkz74" event={"ID":"94f67f46-ba33-4e52-a4f7-dabfa0e919c8","Type":"ContainerDied","Data":"f4f7a612ea2c92422c886b42a77299f9283b94e607b5c56a348ca3a534f04f2c"} Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.669194 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq"] Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.673463 4983 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-l7zvq"] Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.817700 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt"] Nov 25 20:39:36 crc kubenswrapper[4983]: E1125 20:39:36.818231 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab6fa75-c38e-4ad3-ad08-f391846e6fac" containerName="route-controller-manager" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.818263 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab6fa75-c38e-4ad3-ad08-f391846e6fac" containerName="route-controller-manager" Nov 25 20:39:36 crc kubenswrapper[4983]: E1125 20:39:36.818303 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98915ddf-6a6b-4c4e-a8b5-379567bbbf09" containerName="controller-manager" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.818315 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="98915ddf-6a6b-4c4e-a8b5-379567bbbf09" containerName="controller-manager" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.818466 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab6fa75-c38e-4ad3-ad08-f391846e6fac" containerName="route-controller-manager" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.818494 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="98915ddf-6a6b-4c4e-a8b5-379567bbbf09" containerName="controller-manager" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.819144 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.821151 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl"] Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.821797 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.821824 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.822110 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.823524 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.823688 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.823806 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.828386 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.828601 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.828643 4983 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.828738 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.830583 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.837171 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl"] Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.843620 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.843670 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt"] Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.843881 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.846263 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.897766 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt"] Nov 25 20:39:36 crc kubenswrapper[4983]: E1125 20:39:36.898188 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-hqjmx serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" podUID="6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6" Nov 25 
20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.929355 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-config\") pod \"route-controller-manager-596459f6cf-w5vjt\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.929421 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k994\" (UniqueName: \"kubernetes.io/projected/54b8df10-504a-4e91-bb16-180d337013f3-kube-api-access-7k994\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.929457 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-client-ca\") pod \"route-controller-manager-596459f6cf-w5vjt\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.929491 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54b8df10-504a-4e91-bb16-180d337013f3-serving-cert\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.929775 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/54b8df10-504a-4e91-bb16-180d337013f3-proxy-ca-bundles\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.929861 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-serving-cert\") pod \"route-controller-manager-596459f6cf-w5vjt\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.930004 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54b8df10-504a-4e91-bb16-180d337013f3-client-ca\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.930071 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b8df10-504a-4e91-bb16-180d337013f3-config\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:36 crc kubenswrapper[4983]: I1125 20:39:36.930253 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjmx\" (UniqueName: \"kubernetes.io/projected/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-kube-api-access-hqjmx\") pod \"route-controller-manager-596459f6cf-w5vjt\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " 
pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.031412 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-serving-cert\") pod \"route-controller-manager-596459f6cf-w5vjt\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.031499 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54b8df10-504a-4e91-bb16-180d337013f3-client-ca\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.031530 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b8df10-504a-4e91-bb16-180d337013f3-config\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.031574 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjmx\" (UniqueName: \"kubernetes.io/projected/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-kube-api-access-hqjmx\") pod \"route-controller-manager-596459f6cf-w5vjt\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.031611 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-config\") pod \"route-controller-manager-596459f6cf-w5vjt\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.031633 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k994\" (UniqueName: \"kubernetes.io/projected/54b8df10-504a-4e91-bb16-180d337013f3-kube-api-access-7k994\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.031658 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-client-ca\") pod \"route-controller-manager-596459f6cf-w5vjt\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.031687 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54b8df10-504a-4e91-bb16-180d337013f3-serving-cert\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.031709 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54b8df10-504a-4e91-bb16-180d337013f3-proxy-ca-bundles\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:37 crc 
kubenswrapper[4983]: I1125 20:39:37.032947 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-client-ca\") pod \"route-controller-manager-596459f6cf-w5vjt\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.033403 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-config\") pod \"route-controller-manager-596459f6cf-w5vjt\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.033488 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54b8df10-504a-4e91-bb16-180d337013f3-client-ca\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.033776 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b8df10-504a-4e91-bb16-180d337013f3-config\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.034247 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/54b8df10-504a-4e91-bb16-180d337013f3-proxy-ca-bundles\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " 
pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.039341 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54b8df10-504a-4e91-bb16-180d337013f3-serving-cert\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.039721 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-serving-cert\") pod \"route-controller-manager-596459f6cf-w5vjt\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.051239 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k994\" (UniqueName: \"kubernetes.io/projected/54b8df10-504a-4e91-bb16-180d337013f3-kube-api-access-7k994\") pod \"controller-manager-5c7ffd74f6-2x7kl\" (UID: \"54b8df10-504a-4e91-bb16-180d337013f3\") " pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.051402 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqjmx\" (UniqueName: \"kubernetes.io/projected/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-kube-api-access-hqjmx\") pod \"route-controller-manager-596459f6cf-w5vjt\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.162152 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.518789 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl"] Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.614730 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ab6fa75-c38e-4ad3-ad08-f391846e6fac" path="/var/lib/kubelet/pods/5ab6fa75-c38e-4ad3-ad08-f391846e6fac/volumes" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.616188 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98915ddf-6a6b-4c4e-a8b5-379567bbbf09" path="/var/lib/kubelet/pods/98915ddf-6a6b-4c4e-a8b5-379567bbbf09/volumes" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.643642 4983 generic.go:334] "Generic (PLEG): container finished" podID="94f67f46-ba33-4e52-a4f7-dabfa0e919c8" containerID="c0a9f94d13d5113fed7683cef35e8e1e14ebd5794af960e30fff52f29d49f098" exitCode=0 Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.643707 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkz74" event={"ID":"94f67f46-ba33-4e52-a4f7-dabfa0e919c8","Type":"ContainerDied","Data":"c0a9f94d13d5113fed7683cef35e8e1e14ebd5794af960e30fff52f29d49f098"} Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.648000 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" event={"ID":"54b8df10-504a-4e91-bb16-180d337013f3","Type":"ContainerStarted","Data":"9292dc57fb5f3b73aefae63926e66b801164c0d5af87847efdd4f93c11439e8b"} Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.650378 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.663041 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.844157 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-config\") pod \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.844229 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-client-ca\") pod \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.844364 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-serving-cert\") pod \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.844389 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqjmx\" (UniqueName: \"kubernetes.io/projected/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-kube-api-access-hqjmx\") pod \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\" (UID: \"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6\") " Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.845263 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6" (UID: "6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.845625 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-config" (OuterVolumeSpecName: "config") pod "6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6" (UID: "6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.851482 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-kube-api-access-hqjmx" (OuterVolumeSpecName: "kube-api-access-hqjmx") pod "6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6" (UID: "6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6"). InnerVolumeSpecName "kube-api-access-hqjmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.851841 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6" (UID: "6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.945923 4983 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.945951 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqjmx\" (UniqueName: \"kubernetes.io/projected/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-kube-api-access-hqjmx\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.945962 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:37 crc kubenswrapper[4983]: I1125 20:39:37.945970 4983 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.099305 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-q2pnt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.660731 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkz74" event={"ID":"94f67f46-ba33-4e52-a4f7-dabfa0e919c8","Type":"ContainerStarted","Data":"7cd91095eb79412f0069f32c2a4dad01c64f72c453ba9c644c5d93864eaf61f4"} Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.660784 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkz74" event={"ID":"94f67f46-ba33-4e52-a4f7-dabfa0e919c8","Type":"ContainerStarted","Data":"ff832488d40a43338226f96c32dadbf8e331fd4c1948923fa9bc861797ddbf58"} Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.660795 4983 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/frr-k8s-nkz74" event={"ID":"94f67f46-ba33-4e52-a4f7-dabfa0e919c8","Type":"ContainerStarted","Data":"dcf9c118766f33e3f352f927ee248ede25527224949750c53803acc817b34d0d"} Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.660805 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkz74" event={"ID":"94f67f46-ba33-4e52-a4f7-dabfa0e919c8","Type":"ContainerStarted","Data":"6161cfd5a3d43eeb821fc0cf36eadd18ffb45fb8744d7064326e8983d7173329"} Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.660815 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkz74" event={"ID":"94f67f46-ba33-4e52-a4f7-dabfa0e919c8","Type":"ContainerStarted","Data":"388546b2c75a5eb41e0b1f5a83769dd1f561e6384fb74be0bcca70f3686bce81"} Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.662860 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" event={"ID":"54b8df10-504a-4e91-bb16-180d337013f3","Type":"ContainerStarted","Data":"e84fca0a658fb22fe6c5491c28fe44cd6d660356843e7583d91842d5a22a2f95"} Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.662931 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.663257 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.670757 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.687162 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c7ffd74f6-2x7kl" podStartSLOduration=3.68714002 podStartE2EDuration="3.68714002s" podCreationTimestamp="2025-11-25 20:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:39:38.683517743 +0000 UTC m=+759.796051145" watchObservedRunningTime="2025-11-25 20:39:38.68714002 +0000 UTC m=+759.799673412" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.721051 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt"] Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.721818 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.734942 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.735259 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.735317 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.735382 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.742092 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.742660 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt"] Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.742702 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.749130 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-596459f6cf-w5vjt"] Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.759065 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt"] Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.863632 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-q7vnr\" (UniqueName: \"kubernetes.io/projected/d477cef9-f174-41ff-9921-a48f401015f2-kube-api-access-q7vnr\") pod \"route-controller-manager-74cfb9595b-cg8rt\" (UID: \"d477cef9-f174-41ff-9921-a48f401015f2\") " pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.863708 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d477cef9-f174-41ff-9921-a48f401015f2-client-ca\") pod \"route-controller-manager-74cfb9595b-cg8rt\" (UID: \"d477cef9-f174-41ff-9921-a48f401015f2\") " pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.863780 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d477cef9-f174-41ff-9921-a48f401015f2-serving-cert\") pod \"route-controller-manager-74cfb9595b-cg8rt\" (UID: \"d477cef9-f174-41ff-9921-a48f401015f2\") " pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.863879 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d477cef9-f174-41ff-9921-a48f401015f2-config\") pod \"route-controller-manager-74cfb9595b-cg8rt\" (UID: \"d477cef9-f174-41ff-9921-a48f401015f2\") " pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.964875 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d477cef9-f174-41ff-9921-a48f401015f2-config\") pod \"route-controller-manager-74cfb9595b-cg8rt\" (UID: 
\"d477cef9-f174-41ff-9921-a48f401015f2\") " pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.964984 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vnr\" (UniqueName: \"kubernetes.io/projected/d477cef9-f174-41ff-9921-a48f401015f2-kube-api-access-q7vnr\") pod \"route-controller-manager-74cfb9595b-cg8rt\" (UID: \"d477cef9-f174-41ff-9921-a48f401015f2\") " pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.965030 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d477cef9-f174-41ff-9921-a48f401015f2-client-ca\") pod \"route-controller-manager-74cfb9595b-cg8rt\" (UID: \"d477cef9-f174-41ff-9921-a48f401015f2\") " pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.965086 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d477cef9-f174-41ff-9921-a48f401015f2-serving-cert\") pod \"route-controller-manager-74cfb9595b-cg8rt\" (UID: \"d477cef9-f174-41ff-9921-a48f401015f2\") " pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.966402 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d477cef9-f174-41ff-9921-a48f401015f2-client-ca\") pod \"route-controller-manager-74cfb9595b-cg8rt\" (UID: \"d477cef9-f174-41ff-9921-a48f401015f2\") " pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.975507 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d477cef9-f174-41ff-9921-a48f401015f2-serving-cert\") pod \"route-controller-manager-74cfb9595b-cg8rt\" (UID: \"d477cef9-f174-41ff-9921-a48f401015f2\") " pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.978819 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d477cef9-f174-41ff-9921-a48f401015f2-config\") pod \"route-controller-manager-74cfb9595b-cg8rt\" (UID: \"d477cef9-f174-41ff-9921-a48f401015f2\") " pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:38 crc kubenswrapper[4983]: I1125 20:39:38.985215 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vnr\" (UniqueName: \"kubernetes.io/projected/d477cef9-f174-41ff-9921-a48f401015f2-kube-api-access-q7vnr\") pod \"route-controller-manager-74cfb9595b-cg8rt\" (UID: \"d477cef9-f174-41ff-9921-a48f401015f2\") " pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:39 crc kubenswrapper[4983]: I1125 20:39:39.038556 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:39 crc kubenswrapper[4983]: I1125 20:39:39.376905 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt"] Nov 25 20:39:39 crc kubenswrapper[4983]: W1125 20:39:39.383141 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd477cef9_f174_41ff_9921_a48f401015f2.slice/crio-b014135e2a9350376e3756ca46d775dc5d23883bc44d7ba78a30046734bb2b7e WatchSource:0}: Error finding container b014135e2a9350376e3756ca46d775dc5d23883bc44d7ba78a30046734bb2b7e: Status 404 returned error can't find the container with id b014135e2a9350376e3756ca46d775dc5d23883bc44d7ba78a30046734bb2b7e Nov 25 20:39:39 crc kubenswrapper[4983]: I1125 20:39:39.613439 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6" path="/var/lib/kubelet/pods/6b78a48f-8d1a-4fce-91f6-6e6ba419b0d6/volumes" Nov 25 20:39:39 crc kubenswrapper[4983]: I1125 20:39:39.675731 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nkz74" event={"ID":"94f67f46-ba33-4e52-a4f7-dabfa0e919c8","Type":"ContainerStarted","Data":"c6629a6c24c4ef63308f34953a1aababeface80ab366089f21bc90bded7f5232"} Nov 25 20:39:39 crc kubenswrapper[4983]: I1125 20:39:39.675927 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:39 crc kubenswrapper[4983]: I1125 20:39:39.677604 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" event={"ID":"d477cef9-f174-41ff-9921-a48f401015f2","Type":"ContainerStarted","Data":"e235e9cbf4c607ee8033a207a4f46daf6c7af76996ca008eeead94a025583fa8"} Nov 25 20:39:39 crc kubenswrapper[4983]: I1125 20:39:39.677651 4983 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" event={"ID":"d477cef9-f174-41ff-9921-a48f401015f2","Type":"ContainerStarted","Data":"b014135e2a9350376e3756ca46d775dc5d23883bc44d7ba78a30046734bb2b7e"} Nov 25 20:39:39 crc kubenswrapper[4983]: I1125 20:39:39.705838 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-nkz74" podStartSLOduration=6.585306842 podStartE2EDuration="13.705811037s" podCreationTimestamp="2025-11-25 20:39:26 +0000 UTC" firstStartedPulling="2025-11-25 20:39:27.218257247 +0000 UTC m=+748.330790639" lastFinishedPulling="2025-11-25 20:39:34.338761412 +0000 UTC m=+755.451294834" observedRunningTime="2025-11-25 20:39:39.701082921 +0000 UTC m=+760.813616333" watchObservedRunningTime="2025-11-25 20:39:39.705811037 +0000 UTC m=+760.818344429" Nov 25 20:39:39 crc kubenswrapper[4983]: I1125 20:39:39.726149 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" podStartSLOduration=3.7261240879999997 podStartE2EDuration="3.726124088s" podCreationTimestamp="2025-11-25 20:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:39:39.725385248 +0000 UTC m=+760.837918670" watchObservedRunningTime="2025-11-25 20:39:39.726124088 +0000 UTC m=+760.838657480" Nov 25 20:39:40 crc kubenswrapper[4983]: I1125 20:39:40.689961 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:40 crc kubenswrapper[4983]: I1125 20:39:40.694805 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-74cfb9595b-cg8rt" Nov 25 20:39:41 crc kubenswrapper[4983]: I1125 
20:39:41.262422 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fklgn"] Nov 25 20:39:41 crc kubenswrapper[4983]: I1125 20:39:41.263203 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fklgn" Nov 25 20:39:41 crc kubenswrapper[4983]: I1125 20:39:41.268404 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 20:39:41 crc kubenswrapper[4983]: I1125 20:39:41.268464 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 20:39:41 crc kubenswrapper[4983]: I1125 20:39:41.358850 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fklgn"] Nov 25 20:39:41 crc kubenswrapper[4983]: I1125 20:39:41.399661 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh6dm\" (UniqueName: \"kubernetes.io/projected/caeeebc5-6878-420d-8402-912fb6478d27-kube-api-access-bh6dm\") pod \"openstack-operator-index-fklgn\" (UID: \"caeeebc5-6878-420d-8402-912fb6478d27\") " pod="openstack-operators/openstack-operator-index-fklgn" Nov 25 20:39:41 crc kubenswrapper[4983]: I1125 20:39:41.501269 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh6dm\" (UniqueName: \"kubernetes.io/projected/caeeebc5-6878-420d-8402-912fb6478d27-kube-api-access-bh6dm\") pod \"openstack-operator-index-fklgn\" (UID: \"caeeebc5-6878-420d-8402-912fb6478d27\") " pod="openstack-operators/openstack-operator-index-fklgn" Nov 25 20:39:41 crc kubenswrapper[4983]: I1125 20:39:41.523927 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh6dm\" (UniqueName: \"kubernetes.io/projected/caeeebc5-6878-420d-8402-912fb6478d27-kube-api-access-bh6dm\") pod 
\"openstack-operator-index-fklgn\" (UID: \"caeeebc5-6878-420d-8402-912fb6478d27\") " pod="openstack-operators/openstack-operator-index-fklgn" Nov 25 20:39:41 crc kubenswrapper[4983]: I1125 20:39:41.582928 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fklgn" Nov 25 20:39:41 crc kubenswrapper[4983]: I1125 20:39:41.996310 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fklgn"] Nov 25 20:39:42 crc kubenswrapper[4983]: I1125 20:39:42.071885 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:42 crc kubenswrapper[4983]: I1125 20:39:42.116880 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:42 crc kubenswrapper[4983]: I1125 20:39:42.703230 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fklgn" event={"ID":"caeeebc5-6878-420d-8402-912fb6478d27","Type":"ContainerStarted","Data":"1775e7f07f159653fc73c057d6348a8aa41e4675cd7eb2cfb5e3984ce920ba2d"} Nov 25 20:39:44 crc kubenswrapper[4983]: I1125 20:39:44.406723 4983 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 20:39:44 crc kubenswrapper[4983]: I1125 20:39:44.627130 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fklgn"] Nov 25 20:39:44 crc kubenswrapper[4983]: I1125 20:39:44.720127 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fklgn" event={"ID":"caeeebc5-6878-420d-8402-912fb6478d27","Type":"ContainerStarted","Data":"e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7"} Nov 25 20:39:45 crc kubenswrapper[4983]: I1125 20:39:45.233868 4983 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/openstack-operator-index-fklgn" podStartSLOduration=1.8072323959999999 podStartE2EDuration="4.233850081s" podCreationTimestamp="2025-11-25 20:39:41 +0000 UTC" firstStartedPulling="2025-11-25 20:39:42.011004465 +0000 UTC m=+763.123537847" lastFinishedPulling="2025-11-25 20:39:44.43762212 +0000 UTC m=+765.550155532" observedRunningTime="2025-11-25 20:39:44.737306573 +0000 UTC m=+765.849839985" watchObservedRunningTime="2025-11-25 20:39:45.233850081 +0000 UTC m=+766.346383473" Nov 25 20:39:45 crc kubenswrapper[4983]: I1125 20:39:45.234171 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-stqz2"] Nov 25 20:39:45 crc kubenswrapper[4983]: I1125 20:39:45.235037 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-stqz2" Nov 25 20:39:45 crc kubenswrapper[4983]: I1125 20:39:45.240308 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ldxd7" Nov 25 20:39:45 crc kubenswrapper[4983]: I1125 20:39:45.244107 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-stqz2"] Nov 25 20:39:45 crc kubenswrapper[4983]: I1125 20:39:45.356714 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cplxx\" (UniqueName: \"kubernetes.io/projected/2a6b637c-d929-42fa-89c6-8e5af3746cc1-kube-api-access-cplxx\") pod \"openstack-operator-index-stqz2\" (UID: \"2a6b637c-d929-42fa-89c6-8e5af3746cc1\") " pod="openstack-operators/openstack-operator-index-stqz2" Nov 25 20:39:45 crc kubenswrapper[4983]: I1125 20:39:45.457671 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cplxx\" (UniqueName: \"kubernetes.io/projected/2a6b637c-d929-42fa-89c6-8e5af3746cc1-kube-api-access-cplxx\") pod 
\"openstack-operator-index-stqz2\" (UID: \"2a6b637c-d929-42fa-89c6-8e5af3746cc1\") " pod="openstack-operators/openstack-operator-index-stqz2" Nov 25 20:39:45 crc kubenswrapper[4983]: I1125 20:39:45.475057 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cplxx\" (UniqueName: \"kubernetes.io/projected/2a6b637c-d929-42fa-89c6-8e5af3746cc1-kube-api-access-cplxx\") pod \"openstack-operator-index-stqz2\" (UID: \"2a6b637c-d929-42fa-89c6-8e5af3746cc1\") " pod="openstack-operators/openstack-operator-index-stqz2" Nov 25 20:39:45 crc kubenswrapper[4983]: I1125 20:39:45.548287 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-stqz2" Nov 25 20:39:45 crc kubenswrapper[4983]: I1125 20:39:45.725874 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-fklgn" podUID="caeeebc5-6878-420d-8402-912fb6478d27" containerName="registry-server" containerID="cri-o://e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7" gracePeriod=2 Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.013425 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-stqz2"] Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.188823 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fklgn" Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.272145 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh6dm\" (UniqueName: \"kubernetes.io/projected/caeeebc5-6878-420d-8402-912fb6478d27-kube-api-access-bh6dm\") pod \"caeeebc5-6878-420d-8402-912fb6478d27\" (UID: \"caeeebc5-6878-420d-8402-912fb6478d27\") " Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.280033 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caeeebc5-6878-420d-8402-912fb6478d27-kube-api-access-bh6dm" (OuterVolumeSpecName: "kube-api-access-bh6dm") pod "caeeebc5-6878-420d-8402-912fb6478d27" (UID: "caeeebc5-6878-420d-8402-912fb6478d27"). InnerVolumeSpecName "kube-api-access-bh6dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.373987 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh6dm\" (UniqueName: \"kubernetes.io/projected/caeeebc5-6878-420d-8402-912fb6478d27-kube-api-access-bh6dm\") on node \"crc\" DevicePath \"\"" Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.615446 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-rfq8m" Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.733873 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-stqz2" event={"ID":"2a6b637c-d929-42fa-89c6-8e5af3746cc1","Type":"ContainerStarted","Data":"b475ef5fe544dd9516ddb0e223fc1775b0881934a6a4eeed2a033f774a7cc6e9"} Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.734429 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-stqz2" 
event={"ID":"2a6b637c-d929-42fa-89c6-8e5af3746cc1","Type":"ContainerStarted","Data":"7dc56e7e3bc35bc51680e87cba4e7aee54f4c9910c196c84ea491dbc32241ad9"} Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.735756 4983 generic.go:334] "Generic (PLEG): container finished" podID="caeeebc5-6878-420d-8402-912fb6478d27" containerID="e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7" exitCode=0 Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.735783 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fklgn" event={"ID":"caeeebc5-6878-420d-8402-912fb6478d27","Type":"ContainerDied","Data":"e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7"} Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.735799 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fklgn" event={"ID":"caeeebc5-6878-420d-8402-912fb6478d27","Type":"ContainerDied","Data":"1775e7f07f159653fc73c057d6348a8aa41e4675cd7eb2cfb5e3984ce920ba2d"} Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.735817 4983 scope.go:117] "RemoveContainer" containerID="e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7" Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.735819 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fklgn" Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.757738 4983 scope.go:117] "RemoveContainer" containerID="e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7" Nov 25 20:39:46 crc kubenswrapper[4983]: E1125 20:39:46.761182 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7\": container with ID starting with e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7 not found: ID does not exist" containerID="e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7" Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.761247 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7"} err="failed to get container status \"e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7\": rpc error: code = NotFound desc = could not find container \"e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7\": container with ID starting with e5d201798252d4e5b0e2c9bf7b76a4f474c2aa6d07d946afd05ae57371ae9ae7 not found: ID does not exist" Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.771823 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-stqz2" podStartSLOduration=1.716571029 podStartE2EDuration="1.77179279s" podCreationTimestamp="2025-11-25 20:39:45 +0000 UTC" firstStartedPulling="2025-11-25 20:39:46.034075748 +0000 UTC m=+767.146609150" lastFinishedPulling="2025-11-25 20:39:46.089297509 +0000 UTC m=+767.201830911" observedRunningTime="2025-11-25 20:39:46.758585079 +0000 UTC m=+767.871118481" watchObservedRunningTime="2025-11-25 20:39:46.77179279 +0000 UTC m=+767.884326192" Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 
20:39:46.784482 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fklgn"] Nov 25 20:39:46 crc kubenswrapper[4983]: I1125 20:39:46.790974 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-fklgn"] Nov 25 20:39:47 crc kubenswrapper[4983]: I1125 20:39:47.078083 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-nkz74" Nov 25 20:39:47 crc kubenswrapper[4983]: I1125 20:39:47.093264 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-6sz7r" Nov 25 20:39:47 crc kubenswrapper[4983]: I1125 20:39:47.616880 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caeeebc5-6878-420d-8402-912fb6478d27" path="/var/lib/kubelet/pods/caeeebc5-6878-420d-8402-912fb6478d27/volumes" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.434799 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lrjqw"] Nov 25 20:39:52 crc kubenswrapper[4983]: E1125 20:39:52.435625 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caeeebc5-6878-420d-8402-912fb6478d27" containerName="registry-server" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.435640 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="caeeebc5-6878-420d-8402-912fb6478d27" containerName="registry-server" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.435788 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="caeeebc5-6878-420d-8402-912fb6478d27" containerName="registry-server" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.436634 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.446376 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lrjqw"] Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.477684 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28tld\" (UniqueName: \"kubernetes.io/projected/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-kube-api-access-28tld\") pod \"community-operators-lrjqw\" (UID: \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\") " pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.477750 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-catalog-content\") pod \"community-operators-lrjqw\" (UID: \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\") " pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.477830 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-utilities\") pod \"community-operators-lrjqw\" (UID: \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\") " pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.579283 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28tld\" (UniqueName: \"kubernetes.io/projected/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-kube-api-access-28tld\") pod \"community-operators-lrjqw\" (UID: \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\") " pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.579348 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-catalog-content\") pod \"community-operators-lrjqw\" (UID: \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\") " pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.579378 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-utilities\") pod \"community-operators-lrjqw\" (UID: \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\") " pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.580017 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-catalog-content\") pod \"community-operators-lrjqw\" (UID: \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\") " pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.580097 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-utilities\") pod \"community-operators-lrjqw\" (UID: \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\") " pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.603350 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28tld\" (UniqueName: \"kubernetes.io/projected/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-kube-api-access-28tld\") pod \"community-operators-lrjqw\" (UID: \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\") " pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:39:52 crc kubenswrapper[4983]: I1125 20:39:52.770413 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:39:53 crc kubenswrapper[4983]: I1125 20:39:53.579742 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lrjqw"] Nov 25 20:39:53 crc kubenswrapper[4983]: W1125 20:39:53.587885 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ccb53a9_070f_4933_9c3c_77dff09bf0ae.slice/crio-bc847e9e649acd7577f213eaadbb23c17b296729bf1c75f45eabe15bdb6ba893 WatchSource:0}: Error finding container bc847e9e649acd7577f213eaadbb23c17b296729bf1c75f45eabe15bdb6ba893: Status 404 returned error can't find the container with id bc847e9e649acd7577f213eaadbb23c17b296729bf1c75f45eabe15bdb6ba893 Nov 25 20:39:53 crc kubenswrapper[4983]: I1125 20:39:53.787906 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjqw" event={"ID":"1ccb53a9-070f-4933-9c3c-77dff09bf0ae","Type":"ContainerStarted","Data":"bc847e9e649acd7577f213eaadbb23c17b296729bf1c75f45eabe15bdb6ba893"} Nov 25 20:39:54 crc kubenswrapper[4983]: I1125 20:39:54.798462 4983 generic.go:334] "Generic (PLEG): container finished" podID="1ccb53a9-070f-4933-9c3c-77dff09bf0ae" containerID="39d9f7a78ad763a4822cefdcb8bcf2aff4a1de97d20089ad73f29ef182ea0fc6" exitCode=0 Nov 25 20:39:54 crc kubenswrapper[4983]: I1125 20:39:54.798577 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjqw" event={"ID":"1ccb53a9-070f-4933-9c3c-77dff09bf0ae","Type":"ContainerDied","Data":"39d9f7a78ad763a4822cefdcb8bcf2aff4a1de97d20089ad73f29ef182ea0fc6"} Nov 25 20:39:55 crc kubenswrapper[4983]: I1125 20:39:55.548950 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-stqz2" Nov 25 20:39:55 crc kubenswrapper[4983]: I1125 20:39:55.548995 4983 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-operator-index-stqz2" Nov 25 20:39:55 crc kubenswrapper[4983]: I1125 20:39:55.574264 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-stqz2" Nov 25 20:39:55 crc kubenswrapper[4983]: I1125 20:39:55.808719 4983 generic.go:334] "Generic (PLEG): container finished" podID="1ccb53a9-070f-4933-9c3c-77dff09bf0ae" containerID="b66c87961fc43c69eace6233a776a5048c2f8c284a37e4753cce35cb8b1a8e4c" exitCode=0 Nov 25 20:39:55 crc kubenswrapper[4983]: I1125 20:39:55.808806 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjqw" event={"ID":"1ccb53a9-070f-4933-9c3c-77dff09bf0ae","Type":"ContainerDied","Data":"b66c87961fc43c69eace6233a776a5048c2f8c284a37e4753cce35cb8b1a8e4c"} Nov 25 20:39:55 crc kubenswrapper[4983]: I1125 20:39:55.850609 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-stqz2" Nov 25 20:39:56 crc kubenswrapper[4983]: I1125 20:39:56.824137 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjqw" event={"ID":"1ccb53a9-070f-4933-9c3c-77dff09bf0ae","Type":"ContainerStarted","Data":"bf9e6c236e3feb10e0cb9995a2a57d83a9743eca63ec71ba6a6160510667daa6"} Nov 25 20:39:56 crc kubenswrapper[4983]: I1125 20:39:56.841979 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lrjqw" podStartSLOduration=3.407051338 podStartE2EDuration="4.841961992s" podCreationTimestamp="2025-11-25 20:39:52 +0000 UTC" firstStartedPulling="2025-11-25 20:39:54.800362256 +0000 UTC m=+775.912895688" lastFinishedPulling="2025-11-25 20:39:56.23527295 +0000 UTC m=+777.347806342" observedRunningTime="2025-11-25 20:39:56.839785784 +0000 UTC m=+777.952319176" watchObservedRunningTime="2025-11-25 20:39:56.841961992 +0000 UTC 
m=+777.954495384" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.283241 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp"] Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.285795 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.288542 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zb7bk" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.294562 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp"] Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.399322 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snh4r\" (UniqueName: \"kubernetes.io/projected/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-kube-api-access-snh4r\") pod \"5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp\" (UID: \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\") " pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.399502 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-util\") pod \"5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp\" (UID: \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\") " pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.399611 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-bundle\") pod \"5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp\" (UID: \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\") " pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.501217 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-bundle\") pod \"5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp\" (UID: \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\") " pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.501343 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snh4r\" (UniqueName: \"kubernetes.io/projected/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-kube-api-access-snh4r\") pod \"5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp\" (UID: \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\") " pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.501375 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-util\") pod \"5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp\" (UID: \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\") " pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.501920 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-util\") pod \"5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp\" (UID: 
\"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\") " pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.501920 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-bundle\") pod \"5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp\" (UID: \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\") " pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.527662 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snh4r\" (UniqueName: \"kubernetes.io/projected/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-kube-api-access-snh4r\") pod \"5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp\" (UID: \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\") " pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.612859 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.775195 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.775672 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.839816 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:40:02 crc kubenswrapper[4983]: I1125 20:40:02.912057 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:40:03 crc kubenswrapper[4983]: I1125 20:40:03.138085 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp"] Nov 25 20:40:03 crc kubenswrapper[4983]: W1125 20:40:03.140791 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5df8fe9d_7ee2_4f34_a56d_d7baaa1e4183.slice/crio-c5d0792869377554713bc75d6279c499c23d7f23478044b587eff2715c51dbdb WatchSource:0}: Error finding container c5d0792869377554713bc75d6279c499c23d7f23478044b587eff2715c51dbdb: Status 404 returned error can't find the container with id c5d0792869377554713bc75d6279c499c23d7f23478044b587eff2715c51dbdb Nov 25 20:40:03 crc kubenswrapper[4983]: I1125 20:40:03.871349 4983 generic.go:334] "Generic (PLEG): container finished" podID="5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" containerID="7d71faabdd193a0afe67a7f5122e291ae3bcb0f2ecacbc45fbe064b2f76a1d2f" exitCode=0 Nov 25 20:40:03 crc kubenswrapper[4983]: I1125 20:40:03.871411 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" event={"ID":"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183","Type":"ContainerDied","Data":"7d71faabdd193a0afe67a7f5122e291ae3bcb0f2ecacbc45fbe064b2f76a1d2f"} Nov 25 20:40:03 crc kubenswrapper[4983]: I1125 20:40:03.871883 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" event={"ID":"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183","Type":"ContainerStarted","Data":"c5d0792869377554713bc75d6279c499c23d7f23478044b587eff2715c51dbdb"} Nov 25 20:40:04 crc kubenswrapper[4983]: I1125 20:40:04.883112 4983 generic.go:334] "Generic (PLEG): container finished" podID="5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" containerID="9e479c9fb68066c88674ababaae3f086e3b0fc2518f60c2cc61033d5109206d3" exitCode=0 Nov 25 20:40:04 crc kubenswrapper[4983]: I1125 20:40:04.883200 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" event={"ID":"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183","Type":"ContainerDied","Data":"9e479c9fb68066c88674ababaae3f086e3b0fc2518f60c2cc61033d5109206d3"} Nov 25 20:40:05 crc kubenswrapper[4983]: I1125 20:40:05.839127 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lrjqw"] Nov 25 20:40:05 crc kubenswrapper[4983]: I1125 20:40:05.841125 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lrjqw" podUID="1ccb53a9-070f-4933-9c3c-77dff09bf0ae" containerName="registry-server" containerID="cri-o://bf9e6c236e3feb10e0cb9995a2a57d83a9743eca63ec71ba6a6160510667daa6" gracePeriod=2 Nov 25 20:40:05 crc kubenswrapper[4983]: I1125 20:40:05.893191 4983 generic.go:334] "Generic (PLEG): container finished" podID="5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" 
containerID="533c3bea3ed91b65d1af7cd0f22b35782baa176e4ce72360bc3c1225d9e601cf" exitCode=0 Nov 25 20:40:05 crc kubenswrapper[4983]: I1125 20:40:05.893239 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" event={"ID":"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183","Type":"ContainerDied","Data":"533c3bea3ed91b65d1af7cd0f22b35782baa176e4ce72360bc3c1225d9e601cf"} Nov 25 20:40:06 crc kubenswrapper[4983]: I1125 20:40:06.906061 4983 generic.go:334] "Generic (PLEG): container finished" podID="1ccb53a9-070f-4933-9c3c-77dff09bf0ae" containerID="bf9e6c236e3feb10e0cb9995a2a57d83a9743eca63ec71ba6a6160510667daa6" exitCode=0 Nov 25 20:40:06 crc kubenswrapper[4983]: I1125 20:40:06.906149 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjqw" event={"ID":"1ccb53a9-070f-4933-9c3c-77dff09bf0ae","Type":"ContainerDied","Data":"bf9e6c236e3feb10e0cb9995a2a57d83a9743eca63ec71ba6a6160510667daa6"} Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.075667 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.079809 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28tld\" (UniqueName: \"kubernetes.io/projected/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-kube-api-access-28tld\") pod \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\" (UID: \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\") " Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.079917 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-catalog-content\") pod \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\" (UID: \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\") " Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.079976 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-utilities\") pod \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\" (UID: \"1ccb53a9-070f-4933-9c3c-77dff09bf0ae\") " Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.081151 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-utilities" (OuterVolumeSpecName: "utilities") pod "1ccb53a9-070f-4933-9c3c-77dff09bf0ae" (UID: "1ccb53a9-070f-4933-9c3c-77dff09bf0ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.092842 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-kube-api-access-28tld" (OuterVolumeSpecName: "kube-api-access-28tld") pod "1ccb53a9-070f-4933-9c3c-77dff09bf0ae" (UID: "1ccb53a9-070f-4933-9c3c-77dff09bf0ae"). InnerVolumeSpecName "kube-api-access-28tld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.136904 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ccb53a9-070f-4933-9c3c-77dff09bf0ae" (UID: "1ccb53a9-070f-4933-9c3c-77dff09bf0ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.180720 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.180754 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.180767 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28tld\" (UniqueName: \"kubernetes.io/projected/1ccb53a9-070f-4933-9c3c-77dff09bf0ae-kube-api-access-28tld\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.287597 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.381802 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snh4r\" (UniqueName: \"kubernetes.io/projected/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-kube-api-access-snh4r\") pod \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\" (UID: \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\") " Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.381842 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-util\") pod \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\" (UID: \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\") " Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.381897 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-bundle\") pod \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\" (UID: \"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183\") " Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.382730 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-bundle" (OuterVolumeSpecName: "bundle") pod "5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" (UID: "5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.389734 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-kube-api-access-snh4r" (OuterVolumeSpecName: "kube-api-access-snh4r") pod "5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" (UID: "5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183"). InnerVolumeSpecName "kube-api-access-snh4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.395022 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-util" (OuterVolumeSpecName: "util") pod "5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" (UID: "5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.483521 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snh4r\" (UniqueName: \"kubernetes.io/projected/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-kube-api-access-snh4r\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.483588 4983 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-util\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.483609 4983 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.915683 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjqw" event={"ID":"1ccb53a9-070f-4933-9c3c-77dff09bf0ae","Type":"ContainerDied","Data":"bc847e9e649acd7577f213eaadbb23c17b296729bf1c75f45eabe15bdb6ba893"} Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.915703 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lrjqw" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.915816 4983 scope.go:117] "RemoveContainer" containerID="bf9e6c236e3feb10e0cb9995a2a57d83a9743eca63ec71ba6a6160510667daa6" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.921429 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" event={"ID":"5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183","Type":"ContainerDied","Data":"c5d0792869377554713bc75d6279c499c23d7f23478044b587eff2715c51dbdb"} Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.921475 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5d0792869377554713bc75d6279c499c23d7f23478044b587eff2715c51dbdb" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.921623 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.946541 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lrjqw"] Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.947404 4983 scope.go:117] "RemoveContainer" containerID="b66c87961fc43c69eace6233a776a5048c2f8c284a37e4753cce35cb8b1a8e4c" Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.953151 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lrjqw"] Nov 25 20:40:07 crc kubenswrapper[4983]: I1125 20:40:07.969996 4983 scope.go:117] "RemoveContainer" containerID="39d9f7a78ad763a4822cefdcb8bcf2aff4a1de97d20089ad73f29ef182ea0fc6" Nov 25 20:40:09 crc kubenswrapper[4983]: I1125 20:40:09.623496 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ccb53a9-070f-4933-9c3c-77dff09bf0ae" 
path="/var/lib/kubelet/pods/1ccb53a9-070f-4933-9c3c-77dff09bf0ae/volumes" Nov 25 20:40:09 crc kubenswrapper[4983]: I1125 20:40:09.928061 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:40:09 crc kubenswrapper[4983]: I1125 20:40:09.928177 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.434219 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th"] Nov 25 20:40:12 crc kubenswrapper[4983]: E1125 20:40:12.434438 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" containerName="pull" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.434450 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" containerName="pull" Nov 25 20:40:12 crc kubenswrapper[4983]: E1125 20:40:12.434468 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" containerName="util" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.434474 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" containerName="util" Nov 25 20:40:12 crc kubenswrapper[4983]: E1125 20:40:12.434481 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" containerName="extract" Nov 25 20:40:12 crc 
kubenswrapper[4983]: I1125 20:40:12.434487 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" containerName="extract" Nov 25 20:40:12 crc kubenswrapper[4983]: E1125 20:40:12.434511 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ccb53a9-070f-4933-9c3c-77dff09bf0ae" containerName="registry-server" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.434517 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ccb53a9-070f-4933-9c3c-77dff09bf0ae" containerName="registry-server" Nov 25 20:40:12 crc kubenswrapper[4983]: E1125 20:40:12.434528 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ccb53a9-070f-4933-9c3c-77dff09bf0ae" containerName="extract-utilities" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.434534 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ccb53a9-070f-4933-9c3c-77dff09bf0ae" containerName="extract-utilities" Nov 25 20:40:12 crc kubenswrapper[4983]: E1125 20:40:12.434542 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ccb53a9-070f-4933-9c3c-77dff09bf0ae" containerName="extract-content" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.434563 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ccb53a9-070f-4933-9c3c-77dff09bf0ae" containerName="extract-content" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.434661 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ccb53a9-070f-4933-9c3c-77dff09bf0ae" containerName="registry-server" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.434670 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183" containerName="extract" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.435047 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.437292 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-jvhsb" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.455396 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hffr\" (UniqueName: \"kubernetes.io/projected/668ad5ef-ec7f-4239-94c5-8bb868f653ce-kube-api-access-6hffr\") pod \"openstack-operator-controller-operator-6b8dd87645-g89th\" (UID: \"668ad5ef-ec7f-4239-94c5-8bb868f653ce\") " pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.463124 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th"] Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.557032 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hffr\" (UniqueName: \"kubernetes.io/projected/668ad5ef-ec7f-4239-94c5-8bb868f653ce-kube-api-access-6hffr\") pod \"openstack-operator-controller-operator-6b8dd87645-g89th\" (UID: \"668ad5ef-ec7f-4239-94c5-8bb868f653ce\") " pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.587825 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hffr\" (UniqueName: \"kubernetes.io/projected/668ad5ef-ec7f-4239-94c5-8bb868f653ce-kube-api-access-6hffr\") pod \"openstack-operator-controller-operator-6b8dd87645-g89th\" (UID: \"668ad5ef-ec7f-4239-94c5-8bb868f653ce\") " pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" Nov 25 20:40:12 crc kubenswrapper[4983]: I1125 20:40:12.751697 4983 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" Nov 25 20:40:13 crc kubenswrapper[4983]: I1125 20:40:13.244516 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th"] Nov 25 20:40:13 crc kubenswrapper[4983]: I1125 20:40:13.978133 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" event={"ID":"668ad5ef-ec7f-4239-94c5-8bb868f653ce","Type":"ContainerStarted","Data":"abecbbfe153fb9e0ccad8c411d7bd63e389c9557b49f31ba0ddbaa110b944899"} Nov 25 20:40:18 crc kubenswrapper[4983]: I1125 20:40:18.018420 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" event={"ID":"668ad5ef-ec7f-4239-94c5-8bb868f653ce","Type":"ContainerStarted","Data":"03f2d6cb3de1e454a3267d4f5a089b8e764520b2333c623bbfd84cac9ff88394"} Nov 25 20:40:18 crc kubenswrapper[4983]: I1125 20:40:18.020921 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" Nov 25 20:40:18 crc kubenswrapper[4983]: I1125 20:40:18.071077 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" podStartSLOduration=1.99239678 podStartE2EDuration="6.071039912s" podCreationTimestamp="2025-11-25 20:40:12 +0000 UTC" firstStartedPulling="2025-11-25 20:40:13.257453552 +0000 UTC m=+794.369986964" lastFinishedPulling="2025-11-25 20:40:17.336096684 +0000 UTC m=+798.448630096" observedRunningTime="2025-11-25 20:40:18.068428702 +0000 UTC m=+799.180962154" watchObservedRunningTime="2025-11-25 20:40:18.071039912 +0000 UTC m=+799.183573364" Nov 25 20:40:22 crc kubenswrapper[4983]: I1125 20:40:22.756472 4983 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.577981 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xwgt6"] Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.582611 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.586254 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xwgt6"] Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.660170 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec52086d-bc5a-48eb-8d51-d4330f757a18-catalog-content\") pod \"redhat-marketplace-xwgt6\" (UID: \"ec52086d-bc5a-48eb-8d51-d4330f757a18\") " pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.660220 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmmsx\" (UniqueName: \"kubernetes.io/projected/ec52086d-bc5a-48eb-8d51-d4330f757a18-kube-api-access-fmmsx\") pod \"redhat-marketplace-xwgt6\" (UID: \"ec52086d-bc5a-48eb-8d51-d4330f757a18\") " pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.660253 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec52086d-bc5a-48eb-8d51-d4330f757a18-utilities\") pod \"redhat-marketplace-xwgt6\" (UID: \"ec52086d-bc5a-48eb-8d51-d4330f757a18\") " pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.761253 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec52086d-bc5a-48eb-8d51-d4330f757a18-catalog-content\") pod \"redhat-marketplace-xwgt6\" (UID: \"ec52086d-bc5a-48eb-8d51-d4330f757a18\") " pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.761325 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmmsx\" (UniqueName: \"kubernetes.io/projected/ec52086d-bc5a-48eb-8d51-d4330f757a18-kube-api-access-fmmsx\") pod \"redhat-marketplace-xwgt6\" (UID: \"ec52086d-bc5a-48eb-8d51-d4330f757a18\") " pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.761371 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec52086d-bc5a-48eb-8d51-d4330f757a18-utilities\") pod \"redhat-marketplace-xwgt6\" (UID: \"ec52086d-bc5a-48eb-8d51-d4330f757a18\") " pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.762079 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec52086d-bc5a-48eb-8d51-d4330f757a18-utilities\") pod \"redhat-marketplace-xwgt6\" (UID: \"ec52086d-bc5a-48eb-8d51-d4330f757a18\") " pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.762313 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec52086d-bc5a-48eb-8d51-d4330f757a18-catalog-content\") pod \"redhat-marketplace-xwgt6\" (UID: \"ec52086d-bc5a-48eb-8d51-d4330f757a18\") " pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.786633 4983 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fmmsx\" (UniqueName: \"kubernetes.io/projected/ec52086d-bc5a-48eb-8d51-d4330f757a18-kube-api-access-fmmsx\") pod \"redhat-marketplace-xwgt6\" (UID: \"ec52086d-bc5a-48eb-8d51-d4330f757a18\") " pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:26 crc kubenswrapper[4983]: I1125 20:40:26.958213 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:27 crc kubenswrapper[4983]: I1125 20:40:27.460867 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xwgt6"] Nov 25 20:40:28 crc kubenswrapper[4983]: I1125 20:40:28.138109 4983 generic.go:334] "Generic (PLEG): container finished" podID="ec52086d-bc5a-48eb-8d51-d4330f757a18" containerID="3df1bfa02c0c32481c26d0e5747d8e6833e84394bda122e9c72cc4d7368774d4" exitCode=0 Nov 25 20:40:28 crc kubenswrapper[4983]: I1125 20:40:28.138173 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xwgt6" event={"ID":"ec52086d-bc5a-48eb-8d51-d4330f757a18","Type":"ContainerDied","Data":"3df1bfa02c0c32481c26d0e5747d8e6833e84394bda122e9c72cc4d7368774d4"} Nov 25 20:40:28 crc kubenswrapper[4983]: I1125 20:40:28.138611 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xwgt6" event={"ID":"ec52086d-bc5a-48eb-8d51-d4330f757a18","Type":"ContainerStarted","Data":"2feec25f2942752f35354398ced959605643d2cc53c4be8c9e17b46410737dd3"} Nov 25 20:40:29 crc kubenswrapper[4983]: I1125 20:40:29.145672 4983 generic.go:334] "Generic (PLEG): container finished" podID="ec52086d-bc5a-48eb-8d51-d4330f757a18" containerID="3cbd4d28477c7e5531b98812bcf33b93c573cbbf5291ed15388a056e3bd0e5c3" exitCode=0 Nov 25 20:40:29 crc kubenswrapper[4983]: I1125 20:40:29.145758 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xwgt6" 
event={"ID":"ec52086d-bc5a-48eb-8d51-d4330f757a18","Type":"ContainerDied","Data":"3cbd4d28477c7e5531b98812bcf33b93c573cbbf5291ed15388a056e3bd0e5c3"} Nov 25 20:40:30 crc kubenswrapper[4983]: I1125 20:40:30.154240 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xwgt6" event={"ID":"ec52086d-bc5a-48eb-8d51-d4330f757a18","Type":"ContainerStarted","Data":"1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15"} Nov 25 20:40:30 crc kubenswrapper[4983]: I1125 20:40:30.201390 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xwgt6" podStartSLOduration=2.812263733 podStartE2EDuration="4.201373138s" podCreationTimestamp="2025-11-25 20:40:26 +0000 UTC" firstStartedPulling="2025-11-25 20:40:28.141773591 +0000 UTC m=+809.254307023" lastFinishedPulling="2025-11-25 20:40:29.530883036 +0000 UTC m=+810.643416428" observedRunningTime="2025-11-25 20:40:30.198280685 +0000 UTC m=+811.310814087" watchObservedRunningTime="2025-11-25 20:40:30.201373138 +0000 UTC m=+811.313906530" Nov 25 20:40:36 crc kubenswrapper[4983]: I1125 20:40:36.959456 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:36 crc kubenswrapper[4983]: I1125 20:40:36.960704 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:37 crc kubenswrapper[4983]: I1125 20:40:37.038835 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:37 crc kubenswrapper[4983]: I1125 20:40:37.278160 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:37 crc kubenswrapper[4983]: I1125 20:40:37.341684 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xwgt6"] Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.217617 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xwgt6" podUID="ec52086d-bc5a-48eb-8d51-d4330f757a18" containerName="registry-server" containerID="cri-o://1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15" gracePeriod=2 Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.638993 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.771773 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec52086d-bc5a-48eb-8d51-d4330f757a18-catalog-content\") pod \"ec52086d-bc5a-48eb-8d51-d4330f757a18\" (UID: \"ec52086d-bc5a-48eb-8d51-d4330f757a18\") " Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.771822 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec52086d-bc5a-48eb-8d51-d4330f757a18-utilities\") pod \"ec52086d-bc5a-48eb-8d51-d4330f757a18\" (UID: \"ec52086d-bc5a-48eb-8d51-d4330f757a18\") " Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.771862 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmmsx\" (UniqueName: \"kubernetes.io/projected/ec52086d-bc5a-48eb-8d51-d4330f757a18-kube-api-access-fmmsx\") pod \"ec52086d-bc5a-48eb-8d51-d4330f757a18\" (UID: \"ec52086d-bc5a-48eb-8d51-d4330f757a18\") " Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.781271 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec52086d-bc5a-48eb-8d51-d4330f757a18-utilities" (OuterVolumeSpecName: "utilities") pod "ec52086d-bc5a-48eb-8d51-d4330f757a18" (UID: 
"ec52086d-bc5a-48eb-8d51-d4330f757a18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.822459 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec52086d-bc5a-48eb-8d51-d4330f757a18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec52086d-bc5a-48eb-8d51-d4330f757a18" (UID: "ec52086d-bc5a-48eb-8d51-d4330f757a18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.824808 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec52086d-bc5a-48eb-8d51-d4330f757a18-kube-api-access-fmmsx" (OuterVolumeSpecName: "kube-api-access-fmmsx") pod "ec52086d-bc5a-48eb-8d51-d4330f757a18" (UID: "ec52086d-bc5a-48eb-8d51-d4330f757a18"). InnerVolumeSpecName "kube-api-access-fmmsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.873890 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec52086d-bc5a-48eb-8d51-d4330f757a18-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.873940 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec52086d-bc5a-48eb-8d51-d4330f757a18-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.873952 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmmsx\" (UniqueName: \"kubernetes.io/projected/ec52086d-bc5a-48eb-8d51-d4330f757a18-kube-api-access-fmmsx\") on node \"crc\" DevicePath \"\"" Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.927970 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:40:39 crc kubenswrapper[4983]: I1125 20:40:39.928066 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.229041 4983 generic.go:334] "Generic (PLEG): container finished" podID="ec52086d-bc5a-48eb-8d51-d4330f757a18" containerID="1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15" exitCode=0 Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.229102 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xwgt6" event={"ID":"ec52086d-bc5a-48eb-8d51-d4330f757a18","Type":"ContainerDied","Data":"1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15"} Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.229148 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xwgt6" Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.229197 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xwgt6" event={"ID":"ec52086d-bc5a-48eb-8d51-d4330f757a18","Type":"ContainerDied","Data":"2feec25f2942752f35354398ced959605643d2cc53c4be8c9e17b46410737dd3"} Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.229226 4983 scope.go:117] "RemoveContainer" containerID="1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15" Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.255809 4983 scope.go:117] "RemoveContainer" containerID="3cbd4d28477c7e5531b98812bcf33b93c573cbbf5291ed15388a056e3bd0e5c3" Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.268498 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xwgt6"] Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.275339 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xwgt6"] Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.281710 4983 scope.go:117] "RemoveContainer" containerID="3df1bfa02c0c32481c26d0e5747d8e6833e84394bda122e9c72cc4d7368774d4" Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.303650 4983 scope.go:117] "RemoveContainer" containerID="1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15" Nov 25 20:40:40 crc kubenswrapper[4983]: E1125 20:40:40.304313 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15\": container with ID starting with 1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15 not found: ID does not exist" containerID="1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15" Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.304361 4983 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15"} err="failed to get container status \"1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15\": rpc error: code = NotFound desc = could not find container \"1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15\": container with ID starting with 1a4de0aec4f3b84579c1e980e508f00a734883ee60f24f60993e11a4d5c5bc15 not found: ID does not exist" Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.304392 4983 scope.go:117] "RemoveContainer" containerID="3cbd4d28477c7e5531b98812bcf33b93c573cbbf5291ed15388a056e3bd0e5c3" Nov 25 20:40:40 crc kubenswrapper[4983]: E1125 20:40:40.305088 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cbd4d28477c7e5531b98812bcf33b93c573cbbf5291ed15388a056e3bd0e5c3\": container with ID starting with 3cbd4d28477c7e5531b98812bcf33b93c573cbbf5291ed15388a056e3bd0e5c3 not found: ID does not exist" containerID="3cbd4d28477c7e5531b98812bcf33b93c573cbbf5291ed15388a056e3bd0e5c3" Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.305139 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cbd4d28477c7e5531b98812bcf33b93c573cbbf5291ed15388a056e3bd0e5c3"} err="failed to get container status \"3cbd4d28477c7e5531b98812bcf33b93c573cbbf5291ed15388a056e3bd0e5c3\": rpc error: code = NotFound desc = could not find container \"3cbd4d28477c7e5531b98812bcf33b93c573cbbf5291ed15388a056e3bd0e5c3\": container with ID starting with 3cbd4d28477c7e5531b98812bcf33b93c573cbbf5291ed15388a056e3bd0e5c3 not found: ID does not exist" Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.305171 4983 scope.go:117] "RemoveContainer" containerID="3df1bfa02c0c32481c26d0e5747d8e6833e84394bda122e9c72cc4d7368774d4" Nov 25 20:40:40 crc kubenswrapper[4983]: E1125 
20:40:40.305523 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df1bfa02c0c32481c26d0e5747d8e6833e84394bda122e9c72cc4d7368774d4\": container with ID starting with 3df1bfa02c0c32481c26d0e5747d8e6833e84394bda122e9c72cc4d7368774d4 not found: ID does not exist" containerID="3df1bfa02c0c32481c26d0e5747d8e6833e84394bda122e9c72cc4d7368774d4" Nov 25 20:40:40 crc kubenswrapper[4983]: I1125 20:40:40.305562 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df1bfa02c0c32481c26d0e5747d8e6833e84394bda122e9c72cc4d7368774d4"} err="failed to get container status \"3df1bfa02c0c32481c26d0e5747d8e6833e84394bda122e9c72cc4d7368774d4\": rpc error: code = NotFound desc = could not find container \"3df1bfa02c0c32481c26d0e5747d8e6833e84394bda122e9c72cc4d7368774d4\": container with ID starting with 3df1bfa02c0c32481c26d0e5747d8e6833e84394bda122e9c72cc4d7368774d4 not found: ID does not exist" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.318374 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq"] Nov 25 20:40:41 crc kubenswrapper[4983]: E1125 20:40:41.319048 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec52086d-bc5a-48eb-8d51-d4330f757a18" containerName="extract-utilities" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.319063 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec52086d-bc5a-48eb-8d51-d4330f757a18" containerName="extract-utilities" Nov 25 20:40:41 crc kubenswrapper[4983]: E1125 20:40:41.319072 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec52086d-bc5a-48eb-8d51-d4330f757a18" containerName="extract-content" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.319079 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec52086d-bc5a-48eb-8d51-d4330f757a18" 
containerName="extract-content" Nov 25 20:40:41 crc kubenswrapper[4983]: E1125 20:40:41.319098 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec52086d-bc5a-48eb-8d51-d4330f757a18" containerName="registry-server" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.319105 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec52086d-bc5a-48eb-8d51-d4330f757a18" containerName="registry-server" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.319210 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec52086d-bc5a-48eb-8d51-d4330f757a18" containerName="registry-server" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.319856 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.324679 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-6gbp8" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.332245 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.337419 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.338630 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.344673 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-gkvqg" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.348898 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-lzn84"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.350198 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.353529 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-r8jjl" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.359332 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.367884 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-lzn84"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.393361 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgzzq\" (UniqueName: \"kubernetes.io/projected/cf765330-a0f9-4603-a92b-4aec8feaeafb-kube-api-access-bgzzq\") pod \"cinder-operator-controller-manager-6b7f75547b-b9lnt\" (UID: \"cf765330-a0f9-4603-a92b-4aec8feaeafb\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.393442 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br6r4\" 
(UniqueName: \"kubernetes.io/projected/1ec6aefb-824e-4248-ac00-c1d0b526edc6-kube-api-access-br6r4\") pod \"barbican-operator-controller-manager-7b64f4fb85-nf6tq\" (UID: \"1ec6aefb-824e-4248-ac00-c1d0b526edc6\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.397088 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.398233 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.402457 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hhcsg" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.408593 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.430624 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.431721 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.435068 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mt82l" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.456991 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.458239 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.465778 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-n5qk7" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.482937 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.495142 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br6r4\" (UniqueName: \"kubernetes.io/projected/1ec6aefb-824e-4248-ac00-c1d0b526edc6-kube-api-access-br6r4\") pod \"barbican-operator-controller-manager-7b64f4fb85-nf6tq\" (UID: \"1ec6aefb-824e-4248-ac00-c1d0b526edc6\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.495236 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqzls\" (UniqueName: \"kubernetes.io/projected/00a7db78-81a7-481d-a20e-135c60e139e3-kube-api-access-bqzls\") pod \"designate-operator-controller-manager-955677c94-lzn84\" (UID: \"00a7db78-81a7-481d-a20e-135c60e139e3\") " 
pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.495286 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtkkj\" (UniqueName: \"kubernetes.io/projected/da827172-6e3a-42a7-814c-cdfcc18d48d6-kube-api-access-mtkkj\") pod \"glance-operator-controller-manager-589cbd6b5b-xvxp7\" (UID: \"da827172-6e3a-42a7-814c-cdfcc18d48d6\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.495327 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgzzq\" (UniqueName: \"kubernetes.io/projected/cf765330-a0f9-4603-a92b-4aec8feaeafb-kube-api-access-bgzzq\") pod \"cinder-operator-controller-manager-6b7f75547b-b9lnt\" (UID: \"cf765330-a0f9-4603-a92b-4aec8feaeafb\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.495365 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrtf\" (UniqueName: \"kubernetes.io/projected/48b3567f-5b1a-4f14-891c-775c05e2d768-kube-api-access-9nrtf\") pod \"heat-operator-controller-manager-5b77f656f-t5knb\" (UID: \"48b3567f-5b1a-4f14-891c-775c05e2d768\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.522312 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.522897 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgzzq\" (UniqueName: \"kubernetes.io/projected/cf765330-a0f9-4603-a92b-4aec8feaeafb-kube-api-access-bgzzq\") pod 
\"cinder-operator-controller-manager-6b7f75547b-b9lnt\" (UID: \"cf765330-a0f9-4603-a92b-4aec8feaeafb\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.523503 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br6r4\" (UniqueName: \"kubernetes.io/projected/1ec6aefb-824e-4248-ac00-c1d0b526edc6-kube-api-access-br6r4\") pod \"barbican-operator-controller-manager-7b64f4fb85-nf6tq\" (UID: \"1ec6aefb-824e-4248-ac00-c1d0b526edc6\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.535730 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.537138 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.542259 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ghzdx" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.542625 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.562928 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.577650 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.579042 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.584607 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.585702 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-cxxp7" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.585909 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.591114 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.591242 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jlx4p" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.596516 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert\") pod \"infra-operator-controller-manager-57548d458d-qlm9k\" (UID: \"0d3d657c-e179-43c7-abca-c37f8396d1cd\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.596631 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nrtf\" (UniqueName: \"kubernetes.io/projected/48b3567f-5b1a-4f14-891c-775c05e2d768-kube-api-access-9nrtf\") pod \"heat-operator-controller-manager-5b77f656f-t5knb\" (UID: \"48b3567f-5b1a-4f14-891c-775c05e2d768\") " 
pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.596700 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb2dg\" (UniqueName: \"kubernetes.io/projected/0d3d657c-e179-43c7-abca-c37f8396d1cd-kube-api-access-gb2dg\") pod \"infra-operator-controller-manager-57548d458d-qlm9k\" (UID: \"0d3d657c-e179-43c7-abca-c37f8396d1cd\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.596752 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqzls\" (UniqueName: \"kubernetes.io/projected/00a7db78-81a7-481d-a20e-135c60e139e3-kube-api-access-bqzls\") pod \"designate-operator-controller-manager-955677c94-lzn84\" (UID: \"00a7db78-81a7-481d-a20e-135c60e139e3\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.596775 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9kj2\" (UniqueName: \"kubernetes.io/projected/72f1d28e-26ff-43d3-bd93-54c21d9cdd70-kube-api-access-l9kj2\") pod \"horizon-operator-controller-manager-5d494799bf-cctnq\" (UID: \"72f1d28e-26ff-43d3-bd93-54c21d9cdd70\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.596830 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtkkj\" (UniqueName: \"kubernetes.io/projected/da827172-6e3a-42a7-814c-cdfcc18d48d6-kube-api-access-mtkkj\") pod \"glance-operator-controller-manager-589cbd6b5b-xvxp7\" (UID: \"da827172-6e3a-42a7-814c-cdfcc18d48d6\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 
20:40:41.601241 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.634991 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqzls\" (UniqueName: \"kubernetes.io/projected/00a7db78-81a7-481d-a20e-135c60e139e3-kube-api-access-bqzls\") pod \"designate-operator-controller-manager-955677c94-lzn84\" (UID: \"00a7db78-81a7-481d-a20e-135c60e139e3\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.638641 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtkkj\" (UniqueName: \"kubernetes.io/projected/da827172-6e3a-42a7-814c-cdfcc18d48d6-kube-api-access-mtkkj\") pod \"glance-operator-controller-manager-589cbd6b5b-xvxp7\" (UID: \"da827172-6e3a-42a7-814c-cdfcc18d48d6\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.648915 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.651475 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nrtf\" (UniqueName: \"kubernetes.io/projected/48b3567f-5b1a-4f14-891c-775c05e2d768-kube-api-access-9nrtf\") pod \"heat-operator-controller-manager-5b77f656f-t5knb\" (UID: \"48b3567f-5b1a-4f14-891c-775c05e2d768\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.653382 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec52086d-bc5a-48eb-8d51-d4330f757a18" path="/var/lib/kubelet/pods/ec52086d-bc5a-48eb-8d51-d4330f757a18/volumes" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.654230 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.655161 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.663903 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.663940 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.664751 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.664894 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.665268 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.665490 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.666762 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nqzsf" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.670377 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.670805 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.676176 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.676621 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-tnfqx" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.676800 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dt2hz" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.679006 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2gt5j" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.689328 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.698882 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb2dg\" (UniqueName: \"kubernetes.io/projected/0d3d657c-e179-43c7-abca-c37f8396d1cd-kube-api-access-gb2dg\") pod \"infra-operator-controller-manager-57548d458d-qlm9k\" (UID: \"0d3d657c-e179-43c7-abca-c37f8396d1cd\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.698942 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9kj2\" (UniqueName: \"kubernetes.io/projected/72f1d28e-26ff-43d3-bd93-54c21d9cdd70-kube-api-access-l9kj2\") pod \"horizon-operator-controller-manager-5d494799bf-cctnq\" (UID: \"72f1d28e-26ff-43d3-bd93-54c21d9cdd70\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.698995 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert\") pod \"infra-operator-controller-manager-57548d458d-qlm9k\" (UID: \"0d3d657c-e179-43c7-abca-c37f8396d1cd\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.699056 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjk59\" (UniqueName: \"kubernetes.io/projected/e1668e7f-55bb-415c-b378-1c70483b30a6-kube-api-access-gjk59\") pod \"ironic-operator-controller-manager-67cb4dc6d4-9zpxb\" (UID: \"e1668e7f-55bb-415c-b378-1c70483b30a6\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.699081 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5rcx\" (UniqueName: \"kubernetes.io/projected/e5edd26f-9ffb-4be8-86c1-99d32e812816-kube-api-access-f5rcx\") pod \"keystone-operator-controller-manager-7b4567c7cf-fchv4\" (UID: \"e5edd26f-9ffb-4be8-86c1-99d32e812816\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" Nov 25 20:40:41 crc kubenswrapper[4983]: E1125 20:40:41.700127 4983 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 20:40:41 crc kubenswrapper[4983]: E1125 20:40:41.700198 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert podName:0d3d657c-e179-43c7-abca-c37f8396d1cd nodeName:}" failed. No retries permitted until 2025-11-25 20:40:42.200175189 +0000 UTC m=+823.312708581 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert") pod "infra-operator-controller-manager-57548d458d-qlm9k" (UID: "0d3d657c-e179-43c7-abca-c37f8396d1cd") : secret "infra-operator-webhook-server-cert" not found Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.703703 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.716600 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.717683 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.726727 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hq8ls" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.735645 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.736994 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.751497 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9kj2\" (UniqueName: \"kubernetes.io/projected/72f1d28e-26ff-43d3-bd93-54c21d9cdd70-kube-api-access-l9kj2\") pod \"horizon-operator-controller-manager-5d494799bf-cctnq\" (UID: \"72f1d28e-26ff-43d3-bd93-54c21d9cdd70\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.753086 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.755381 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb2dg\" (UniqueName: \"kubernetes.io/projected/0d3d657c-e179-43c7-abca-c37f8396d1cd-kube-api-access-gb2dg\") pod \"infra-operator-controller-manager-57548d458d-qlm9k\" (UID: \"0d3d657c-e179-43c7-abca-c37f8396d1cd\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.760191 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.770374 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.783287 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.783827 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.784191 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.784448 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.784811 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.788798 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.797591 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-6c5kw" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.797742 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-lnrps" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.798030 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-47hsh" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.798172 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.801661 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltk8n\" (UniqueName: 
\"kubernetes.io/projected/badb10c7-4c8c-42c4-b481-221377fa7255-kube-api-access-ltk8n\") pod \"neutron-operator-controller-manager-6fdcddb789-ljpb8\" (UID: \"badb10c7-4c8c-42c4-b481-221377fa7255\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.801734 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncdt7\" (UniqueName: \"kubernetes.io/projected/2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98-kube-api-access-ncdt7\") pod \"manila-operator-controller-manager-5d499bf58b-f8bh4\" (UID: \"2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.801766 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grjj9\" (UniqueName: \"kubernetes.io/projected/9d7c78e4-4890-4527-9db4-131842750615-kube-api-access-grjj9\") pod \"nova-operator-controller-manager-79556f57fc-dj7nt\" (UID: \"9d7c78e4-4890-4527-9db4-131842750615\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.801789 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjk59\" (UniqueName: \"kubernetes.io/projected/e1668e7f-55bb-415c-b378-1c70483b30a6-kube-api-access-gjk59\") pod \"ironic-operator-controller-manager-67cb4dc6d4-9zpxb\" (UID: \"e1668e7f-55bb-415c-b378-1c70483b30a6\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.801811 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5rcx\" (UniqueName: \"kubernetes.io/projected/e5edd26f-9ffb-4be8-86c1-99d32e812816-kube-api-access-f5rcx\") pod 
\"keystone-operator-controller-manager-7b4567c7cf-fchv4\" (UID: \"e5edd26f-9ffb-4be8-86c1-99d32e812816\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.801839 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w48r\" (UniqueName: \"kubernetes.io/projected/a096f840-35b3-48c1-8c0e-762b67b8bde0-kube-api-access-8w48r\") pod \"octavia-operator-controller-manager-64cdc6ff96-p8q9g\" (UID: \"a096f840-35b3-48c1-8c0e-762b67b8bde0\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.801889 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhg75\" (UniqueName: \"kubernetes.io/projected/afff7723-36e3-42ae-9fac-9f8fdb86d839-kube-api-access-jhg75\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-rwkrr\" (UID: \"afff7723-36e3-42ae-9fac-9f8fdb86d839\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.802531 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.812761 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.824268 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjk59\" (UniqueName: \"kubernetes.io/projected/e1668e7f-55bb-415c-b378-1c70483b30a6-kube-api-access-gjk59\") pod \"ironic-operator-controller-manager-67cb4dc6d4-9zpxb\" (UID: \"e1668e7f-55bb-415c-b378-1c70483b30a6\") " 
pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.830889 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5rcx\" (UniqueName: \"kubernetes.io/projected/e5edd26f-9ffb-4be8-86c1-99d32e812816-kube-api-access-f5rcx\") pod \"keystone-operator-controller-manager-7b4567c7cf-fchv4\" (UID: \"e5edd26f-9ffb-4be8-86c1-99d32e812816\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.831366 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.888601 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.890101 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.899978 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-m68cm" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.905807 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-4c95t"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.907354 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.909886 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg\" (UID: \"4743af06-44e2-438a-82b7-bf32b0f5ca03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.909933 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhg75\" (UniqueName: \"kubernetes.io/projected/afff7723-36e3-42ae-9fac-9f8fdb86d839-kube-api-access-jhg75\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-rwkrr\" (UID: \"afff7723-36e3-42ae-9fac-9f8fdb86d839\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.909952 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbcj8\" (UniqueName: \"kubernetes.io/projected/4743af06-44e2-438a-82b7-bf32b0f5ca03-kube-api-access-jbcj8\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg\" (UID: \"4743af06-44e2-438a-82b7-bf32b0f5ca03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.909998 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wn6k\" (UniqueName: \"kubernetes.io/projected/d7302bdd-d74f-4d95-a354-42fcd52bf22e-kube-api-access-6wn6k\") pod \"ovn-operator-controller-manager-56897c768d-zc5rq\" (UID: \"d7302bdd-d74f-4d95-a354-42fcd52bf22e\") " 
pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.910020 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptd2q\" (UniqueName: \"kubernetes.io/projected/64141c1d-799a-4d72-aa99-e54975052879-kube-api-access-ptd2q\") pod \"placement-operator-controller-manager-57988cc5b5-mhjtj\" (UID: \"64141c1d-799a-4d72-aa99-e54975052879\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.910075 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltk8n\" (UniqueName: \"kubernetes.io/projected/badb10c7-4c8c-42c4-b481-221377fa7255-kube-api-access-ltk8n\") pod \"neutron-operator-controller-manager-6fdcddb789-ljpb8\" (UID: \"badb10c7-4c8c-42c4-b481-221377fa7255\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.910100 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncdt7\" (UniqueName: \"kubernetes.io/projected/2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98-kube-api-access-ncdt7\") pod \"manila-operator-controller-manager-5d499bf58b-f8bh4\" (UID: \"2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.910130 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grjj9\" (UniqueName: \"kubernetes.io/projected/9d7c78e4-4890-4527-9db4-131842750615-kube-api-access-grjj9\") pod \"nova-operator-controller-manager-79556f57fc-dj7nt\" (UID: \"9d7c78e4-4890-4527-9db4-131842750615\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.910157 
4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w48r\" (UniqueName: \"kubernetes.io/projected/a096f840-35b3-48c1-8c0e-762b67b8bde0-kube-api-access-8w48r\") pod \"octavia-operator-controller-manager-64cdc6ff96-p8q9g\" (UID: \"a096f840-35b3-48c1-8c0e-762b67b8bde0\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.927623 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.930948 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-78sw9" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.952082 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grjj9\" (UniqueName: \"kubernetes.io/projected/9d7c78e4-4890-4527-9db4-131842750615-kube-api-access-grjj9\") pod \"nova-operator-controller-manager-79556f57fc-dj7nt\" (UID: \"9d7c78e4-4890-4527-9db4-131842750615\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.957082 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncdt7\" (UniqueName: \"kubernetes.io/projected/2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98-kube-api-access-ncdt7\") pod \"manila-operator-controller-manager-5d499bf58b-f8bh4\" (UID: \"2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.961542 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltk8n\" (UniqueName: \"kubernetes.io/projected/badb10c7-4c8c-42c4-b481-221377fa7255-kube-api-access-ltk8n\") pod 
\"neutron-operator-controller-manager-6fdcddb789-ljpb8\" (UID: \"badb10c7-4c8c-42c4-b481-221377fa7255\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.961830 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w48r\" (UniqueName: \"kubernetes.io/projected/a096f840-35b3-48c1-8c0e-762b67b8bde0-kube-api-access-8w48r\") pod \"octavia-operator-controller-manager-64cdc6ff96-p8q9g\" (UID: \"a096f840-35b3-48c1-8c0e-762b67b8bde0\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.962200 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhg75\" (UniqueName: \"kubernetes.io/projected/afff7723-36e3-42ae-9fac-9f8fdb86d839-kube-api-access-jhg75\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-rwkrr\" (UID: \"afff7723-36e3-42ae-9fac-9f8fdb86d839\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.982759 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt"] Nov 25 20:40:41 crc kubenswrapper[4983]: I1125 20:40:41.990702 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:41.999276 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-4z5vp" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.020543 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wn6k\" (UniqueName: \"kubernetes.io/projected/d7302bdd-d74f-4d95-a354-42fcd52bf22e-kube-api-access-6wn6k\") pod \"ovn-operator-controller-manager-56897c768d-zc5rq\" (UID: \"d7302bdd-d74f-4d95-a354-42fcd52bf22e\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.020806 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drxg4\" (UniqueName: \"kubernetes.io/projected/5b14316c-9639-4934-a5e9-5381d2797ef5-kube-api-access-drxg4\") pod \"swift-operator-controller-manager-d77b94747-4c95t\" (UID: \"5b14316c-9639-4934-a5e9-5381d2797ef5\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.020931 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptd2q\" (UniqueName: \"kubernetes.io/projected/64141c1d-799a-4d72-aa99-e54975052879-kube-api-access-ptd2q\") pod \"placement-operator-controller-manager-57988cc5b5-mhjtj\" (UID: \"64141c1d-799a-4d72-aa99-e54975052879\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.021106 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g2kn\" (UniqueName: \"kubernetes.io/projected/92f1d8fa-69cf-49c3-a616-82a185ff8dd5-kube-api-access-8g2kn\") pod 
\"telemetry-operator-controller-manager-b7bb74d9f-m9bbx\" (UID: \"92f1d8fa-69cf-49c3-a616-82a185ff8dd5\") " pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.021238 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg\" (UID: \"4743af06-44e2-438a-82b7-bf32b0f5ca03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.021310 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbcj8\" (UniqueName: \"kubernetes.io/projected/4743af06-44e2-438a-82b7-bf32b0f5ca03-kube-api-access-jbcj8\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg\" (UID: \"4743af06-44e2-438a-82b7-bf32b0f5ca03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.024111 4983 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.024412 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert podName:4743af06-44e2-438a-82b7-bf32b0f5ca03 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:42.524392396 +0000 UTC m=+823.636925788 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" (UID: "4743af06-44e2-438a-82b7-bf32b0f5ca03") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.024752 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-4c95t"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.058737 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.059512 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.060132 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.063960 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.067906 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptd2q\" (UniqueName: \"kubernetes.io/projected/64141c1d-799a-4d72-aa99-e54975052879-kube-api-access-ptd2q\") pod \"placement-operator-controller-manager-57988cc5b5-mhjtj\" (UID: \"64141c1d-799a-4d72-aa99-e54975052879\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.069700 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbcj8\" (UniqueName: \"kubernetes.io/projected/4743af06-44e2-438a-82b7-bf32b0f5ca03-kube-api-access-jbcj8\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg\" (UID: \"4743af06-44e2-438a-82b7-bf32b0f5ca03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.075185 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.082547 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.084247 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wn6k\" (UniqueName: \"kubernetes.io/projected/d7302bdd-d74f-4d95-a354-42fcd52bf22e-kube-api-access-6wn6k\") pod \"ovn-operator-controller-manager-56897c768d-zc5rq\" (UID: \"d7302bdd-d74f-4d95-a354-42fcd52bf22e\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.086040 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.087147 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.095245 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-x68nk" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.113834 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.122096 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drxg4\" (UniqueName: \"kubernetes.io/projected/5b14316c-9639-4934-a5e9-5381d2797ef5-kube-api-access-drxg4\") pod \"swift-operator-controller-manager-d77b94747-4c95t\" (UID: \"5b14316c-9639-4934-a5e9-5381d2797ef5\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.122147 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpqw8\" (UniqueName: \"kubernetes.io/projected/ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042-kube-api-access-bpqw8\") pod \"test-operator-controller-manager-5cd6c7f4c8-lr7wt\" (UID: \"ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.122207 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g2kn\" (UniqueName: \"kubernetes.io/projected/92f1d8fa-69cf-49c3-a616-82a185ff8dd5-kube-api-access-8g2kn\") pod \"telemetry-operator-controller-manager-b7bb74d9f-m9bbx\" (UID: \"92f1d8fa-69cf-49c3-a616-82a185ff8dd5\") " pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.137755 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.139227 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.148200 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.149483 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.150450 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.150609 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.150970 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bs7sb" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.160845 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drxg4\" (UniqueName: \"kubernetes.io/projected/5b14316c-9639-4934-a5e9-5381d2797ef5-kube-api-access-drxg4\") pod \"swift-operator-controller-manager-d77b94747-4c95t\" (UID: \"5b14316c-9639-4934-a5e9-5381d2797ef5\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.162324 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.168452 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.171626 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.175305 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-frx5n" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.176819 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g2kn\" (UniqueName: \"kubernetes.io/projected/92f1d8fa-69cf-49c3-a616-82a185ff8dd5-kube-api-access-8g2kn\") pod \"telemetry-operator-controller-manager-b7bb74d9f-m9bbx\" (UID: \"92f1d8fa-69cf-49c3-a616-82a185ff8dd5\") " pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.178710 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.209654 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.224050 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " 
pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.224123 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.224197 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert\") pod \"infra-operator-controller-manager-57548d458d-qlm9k\" (UID: \"0d3d657c-e179-43c7-abca-c37f8396d1cd\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.224218 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrwb\" (UniqueName: \"kubernetes.io/projected/f32095da-1fdc-4d52-b082-98b39652cdc6-kube-api-access-rbrwb\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.224244 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j629\" (UniqueName: \"kubernetes.io/projected/1e439ca1-98f3-4650-96da-1e4c1b2da37e-kube-api-access-9j629\") pod \"watcher-operator-controller-manager-656dcb59d4-rpfhz\" (UID: \"1e439ca1-98f3-4650-96da-1e4c1b2da37e\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.224274 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpqw8\" (UniqueName: \"kubernetes.io/projected/ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042-kube-api-access-bpqw8\") pod \"test-operator-controller-manager-5cd6c7f4c8-lr7wt\" (UID: \"ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.224304 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpwl\" (UniqueName: \"kubernetes.io/projected/ff284fea-7792-40e1-8ede-f52412a6c014-kube-api-access-bgpwl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bwf7d\" (UID: \"ff284fea-7792-40e1-8ede-f52412a6c014\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.227140 4983 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.227227 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert podName:0d3d657c-e179-43c7-abca-c37f8396d1cd nodeName:}" failed. No retries permitted until 2025-11-25 20:40:43.227204459 +0000 UTC m=+824.339737841 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert") pod "infra-operator-controller-manager-57548d458d-qlm9k" (UID: "0d3d657c-e179-43c7-abca-c37f8396d1cd") : secret "infra-operator-webhook-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.236461 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.253406 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.256115 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpqw8\" (UniqueName: \"kubernetes.io/projected/ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042-kube-api-access-bpqw8\") pod \"test-operator-controller-manager-5cd6c7f4c8-lr7wt\" (UID: \"ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" Nov 25 20:40:42 crc kubenswrapper[4983]: W1125 20:40:42.293751 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ec6aefb_824e_4248_ac00_c1d0b526edc6.slice/crio-9dc8f13d952aad5ebb4655119dfc714c6a42d85c7b82b03b4ce9d9797b2558e7 WatchSource:0}: Error finding container 9dc8f13d952aad5ebb4655119dfc714c6a42d85c7b82b03b4ce9d9797b2558e7: Status 404 returned error can't find the container with id 9dc8f13d952aad5ebb4655119dfc714c6a42d85c7b82b03b4ce9d9797b2558e7 Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.294117 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.294250 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.321427 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.325580 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.325678 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.325771 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbrwb\" (UniqueName: \"kubernetes.io/projected/f32095da-1fdc-4d52-b082-98b39652cdc6-kube-api-access-rbrwb\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.325802 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j629\" (UniqueName: \"kubernetes.io/projected/1e439ca1-98f3-4650-96da-1e4c1b2da37e-kube-api-access-9j629\") pod \"watcher-operator-controller-manager-656dcb59d4-rpfhz\" (UID: \"1e439ca1-98f3-4650-96da-1e4c1b2da37e\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.325839 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpwl\" (UniqueName: \"kubernetes.io/projected/ff284fea-7792-40e1-8ede-f52412a6c014-kube-api-access-bgpwl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bwf7d\" (UID: \"ff284fea-7792-40e1-8ede-f52412a6c014\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.326322 4983 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.326390 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs podName:f32095da-1fdc-4d52-b082-98b39652cdc6 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:42.826373121 +0000 UTC m=+823.938906513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs") pod "openstack-operator-controller-manager-5cf7cd9d4-bwfnd" (UID: "f32095da-1fdc-4d52-b082-98b39652cdc6") : secret "metrics-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.326459 4983 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.326482 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs podName:f32095da-1fdc-4d52-b082-98b39652cdc6 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:42.826475373 +0000 UTC m=+823.939008765 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs") pod "openstack-operator-controller-manager-5cf7cd9d4-bwfnd" (UID: "f32095da-1fdc-4d52-b082-98b39652cdc6") : secret "webhook-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.347898 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j629\" (UniqueName: \"kubernetes.io/projected/1e439ca1-98f3-4650-96da-1e4c1b2da37e-kube-api-access-9j629\") pod \"watcher-operator-controller-manager-656dcb59d4-rpfhz\" (UID: \"1e439ca1-98f3-4650-96da-1e4c1b2da37e\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.350678 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbrwb\" (UniqueName: \"kubernetes.io/projected/f32095da-1fdc-4d52-b082-98b39652cdc6-kube-api-access-rbrwb\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.350930 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpwl\" (UniqueName: \"kubernetes.io/projected/ff284fea-7792-40e1-8ede-f52412a6c014-kube-api-access-bgpwl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-bwf7d\" (UID: \"ff284fea-7792-40e1-8ede-f52412a6c014\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.421108 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.483128 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.522970 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.530024 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg\" (UID: \"4743af06-44e2-438a-82b7-bf32b0f5ca03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.530180 4983 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.530226 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert podName:4743af06-44e2-438a-82b7-bf32b0f5ca03 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:43.530212171 +0000 UTC m=+824.642745563 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" (UID: "4743af06-44e2-438a-82b7-bf32b0f5ca03") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.573127 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-lzn84"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.840305 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.840923 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.840519 4983 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.841051 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs podName:f32095da-1fdc-4d52-b082-98b39652cdc6 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:43.841005499 +0000 UTC m=+824.953538891 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs") pod "openstack-operator-controller-manager-5cf7cd9d4-bwfnd" (UID: "f32095da-1fdc-4d52-b082-98b39652cdc6") : secret "metrics-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.841144 4983 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: E1125 20:40:42.841239 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs podName:f32095da-1fdc-4d52-b082-98b39652cdc6 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:43.841211845 +0000 UTC m=+824.953745417 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs") pod "openstack-operator-controller-manager-5cf7cd9d4-bwfnd" (UID: "f32095da-1fdc-4d52-b082-98b39652cdc6") : secret "webhook-server-cert" not found Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.913643 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.931930 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.944693 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.954840 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq"] Nov 25 20:40:42 crc kubenswrapper[4983]: I1125 20:40:42.960139 4983 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4"] Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.054038 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr"] Nov 25 20:40:43 crc kubenswrapper[4983]: W1125 20:40:43.062418 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafff7723_36e3_42ae_9fac_9f8fdb86d839.slice/crio-cdb0d8025b9f4843e80d29779cb171ba6ccd60ca48a328fc10ccf3f02e69900c WatchSource:0}: Error finding container cdb0d8025b9f4843e80d29779cb171ba6ccd60ca48a328fc10ccf3f02e69900c: Status 404 returned error can't find the container with id cdb0d8025b9f4843e80d29779cb171ba6ccd60ca48a328fc10ccf3f02e69900c Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.255300 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert\") pod \"infra-operator-controller-manager-57548d458d-qlm9k\" (UID: \"0d3d657c-e179-43c7-abca-c37f8396d1cd\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.255620 4983 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.255760 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert podName:0d3d657c-e179-43c7-abca-c37f8396d1cd nodeName:}" failed. No retries permitted until 2025-11-25 20:40:45.255720037 +0000 UTC m=+826.368253439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert") pod "infra-operator-controller-manager-57548d458d-qlm9k" (UID: "0d3d657c-e179-43c7-abca-c37f8396d1cd") : secret "infra-operator-webhook-server-cert" not found Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.307683 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" event={"ID":"e5edd26f-9ffb-4be8-86c1-99d32e812816","Type":"ContainerStarted","Data":"7a05a3d93005bc17f8a71a92f798b10f95a83d8d0c155afbed2a00accdf6d99c"} Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.309346 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" event={"ID":"da827172-6e3a-42a7-814c-cdfcc18d48d6","Type":"ContainerStarted","Data":"db35c2d4ce7b5a87d3d13148cd71c639340e0076518fb48efcc64c0d9f249ee3"} Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.311260 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" event={"ID":"cf765330-a0f9-4603-a92b-4aec8feaeafb","Type":"ContainerStarted","Data":"b9851c056979ba0a872801ebb1aa72ac095bfcada066570ca0461b8d39e3c701"} Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.317141 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" event={"ID":"e1668e7f-55bb-415c-b378-1c70483b30a6","Type":"ContainerStarted","Data":"12ea942310f70bb641f743d11a6b9bccdc9e88d354ed952309121b94242f458b"} Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.319216 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" 
event={"ID":"72f1d28e-26ff-43d3-bd93-54c21d9cdd70","Type":"ContainerStarted","Data":"a57d68de07357e1e4166b07e184db45f05718c079ef50705bbd7b9a8fe5aa425"} Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.320580 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" event={"ID":"00a7db78-81a7-481d-a20e-135c60e139e3","Type":"ContainerStarted","Data":"10946f84215e9af95382788d0e84ae89578f9142e9fc3853312b5b6477e6a062"} Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.321721 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" event={"ID":"afff7723-36e3-42ae-9fac-9f8fdb86d839","Type":"ContainerStarted","Data":"cdb0d8025b9f4843e80d29779cb171ba6ccd60ca48a328fc10ccf3f02e69900c"} Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.323120 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" event={"ID":"48b3567f-5b1a-4f14-891c-775c05e2d768","Type":"ContainerStarted","Data":"75427680753ab1ec66bc6fbb673e4ceeb4fa1cd3a91db086058ea337ae3e71d4"} Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.324329 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" event={"ID":"1ec6aefb-824e-4248-ac00-c1d0b526edc6","Type":"ContainerStarted","Data":"9dc8f13d952aad5ebb4655119dfc714c6a42d85c7b82b03b4ce9d9797b2558e7"} Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.414611 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj"] Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.428486 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq"] Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 
20:40:43.466970 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt"] Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.479416 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4"] Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.485187 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g"] Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.491702 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8"] Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.498604 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz"] Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.504657 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt"] Nov 25 20:40:43 crc kubenswrapper[4983]: W1125 20:40:43.508336 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbadb10c7_4c8c_42c4_b481_221377fa7255.slice/crio-e19e2001e6eeb213a89727f44b1b7f75390909459516281b85900735434c5976 WatchSource:0}: Error finding container e19e2001e6eeb213a89727f44b1b7f75390909459516281b85900735434c5976: Status 404 returned error can't find the container with id e19e2001e6eeb213a89727f44b1b7f75390909459516281b85900735434c5976 Nov 25 20:40:43 crc kubenswrapper[4983]: W1125 20:40:43.512783 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d7c78e4_4890_4527_9db4_131842750615.slice/crio-7b6462010c2ad87a037fae8daa9efe627a3ae8a58665f02a1508d66a33f0aafa WatchSource:0}: 
Error finding container 7b6462010c2ad87a037fae8daa9efe627a3ae8a58665f02a1508d66a33f0aafa: Status 404 returned error can't find the container with id 7b6462010c2ad87a037fae8daa9efe627a3ae8a58665f02a1508d66a33f0aafa Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.514865 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d"] Nov 25 20:40:43 crc kubenswrapper[4983]: W1125 20:40:43.516023 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bb3e4e5_dd92_4f7d_b69a_b807d19a9e98.slice/crio-f2a602ff056ba7123a1a914d00d2edd376e567e4607fe787279572243d2b859f WatchSource:0}: Error finding container f2a602ff056ba7123a1a914d00d2edd376e567e4607fe787279572243d2b859f: Status 404 returned error can't find the container with id f2a602ff056ba7123a1a914d00d2edd376e567e4607fe787279572243d2b859f Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.519508 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx"] Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.522210 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-4c95t"] Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.561412 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9j629,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-rpfhz_openstack-operators(1e439ca1-98f3-4650-96da-1e4c1b2da37e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.563237 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg\" (UID: \"4743af06-44e2-438a-82b7-bf32b0f5ca03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.563383 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-bwf7d_openstack-operators(ff284fea-7792-40e1-8ede-f52412a6c014): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.563517 4983 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.563633 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert podName:4743af06-44e2-438a-82b7-bf32b0f5ca03 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:45.563617139 +0000 UTC m=+826.676150531 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" (UID: "4743af06-44e2-438a-82b7-bf32b0f5ca03") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.564719 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" podUID="ff284fea-7792-40e1-8ede-f52412a6c014" Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.568177 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9j629,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-rpfhz_openstack-operators(1e439ca1-98f3-4650-96da-1e4c1b2da37e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.569280 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" podUID="1e439ca1-98f3-4650-96da-1e4c1b2da37e" Nov 25 20:40:43 crc kubenswrapper[4983]: W1125 20:40:43.569733 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca7c2bed_d9e1_4eb9_b50e_fee1d2eac042.slice/crio-dc82b4530679fa7d34e7fb8b674d64f4853167cb3470dc2bb390cb316489ba15 WatchSource:0}: Error finding container dc82b4530679fa7d34e7fb8b674d64f4853167cb3470dc2bb390cb316489ba15: Status 404 returned error can't find the container with id 
dc82b4530679fa7d34e7fb8b674d64f4853167cb3470dc2bb390cb316489ba15 Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.572312 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drxg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d77b94747-4c95t_openstack-operators(5b14316c-9639-4934-a5e9-5381d2797ef5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.576766 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drxg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d77b94747-4c95t_openstack-operators(5b14316c-9639-4934-a5e9-5381d2797ef5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.578381 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" podUID="5b14316c-9639-4934-a5e9-5381d2797ef5" Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.601574 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bpqw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-lr7wt_openstack-operators(ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 20:40:43 crc kubenswrapper[4983]: W1125 20:40:43.604155 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92f1d8fa_69cf_49c3_a616_82a185ff8dd5.slice/crio-a294cd808cfec0b56ef32f83f597b023b4ae8b0bb1697b5c6eda2acd21428194 WatchSource:0}: Error finding container a294cd808cfec0b56ef32f83f597b023b4ae8b0bb1697b5c6eda2acd21428194: Status 404 returned error can't find the container with id a294cd808cfec0b56ef32f83f597b023b4ae8b0bb1697b5c6eda2acd21428194 Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.608139 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bpqw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-lr7wt_openstack-operators(ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.609402 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" podUID="ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042" Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.867713 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:43 crc kubenswrapper[4983]: I1125 20:40:43.868332 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.868635 4983 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.868719 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs podName:f32095da-1fdc-4d52-b082-98b39652cdc6 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:45.868693747 +0000 UTC m=+826.981227149 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs") pod "openstack-operator-controller-manager-5cf7cd9d4-bwfnd" (UID: "f32095da-1fdc-4d52-b082-98b39652cdc6") : secret "webhook-server-cert" not found Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.869205 4983 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 20:40:43 crc kubenswrapper[4983]: E1125 20:40:43.869240 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs podName:f32095da-1fdc-4d52-b082-98b39652cdc6 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:45.869229491 +0000 UTC m=+826.981762883 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs") pod "openstack-operator-controller-manager-5cf7cd9d4-bwfnd" (UID: "f32095da-1fdc-4d52-b082-98b39652cdc6") : secret "metrics-server-cert" not found Nov 25 20:40:44 crc kubenswrapper[4983]: I1125 20:40:44.359606 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" event={"ID":"2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98","Type":"ContainerStarted","Data":"f2a602ff056ba7123a1a914d00d2edd376e567e4607fe787279572243d2b859f"} Nov 25 20:40:44 crc kubenswrapper[4983]: I1125 20:40:44.362937 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" event={"ID":"92f1d8fa-69cf-49c3-a616-82a185ff8dd5","Type":"ContainerStarted","Data":"a294cd808cfec0b56ef32f83f597b023b4ae8b0bb1697b5c6eda2acd21428194"} Nov 25 20:40:44 crc kubenswrapper[4983]: I1125 20:40:44.364998 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" event={"ID":"ff284fea-7792-40e1-8ede-f52412a6c014","Type":"ContainerStarted","Data":"6d322efeeb93a9f0774ec98e9fb7bc900811fcc522ca10cb05bb5d5acded4f28"} Nov 25 20:40:44 crc kubenswrapper[4983]: I1125 20:40:44.368707 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" event={"ID":"9d7c78e4-4890-4527-9db4-131842750615","Type":"ContainerStarted","Data":"7b6462010c2ad87a037fae8daa9efe627a3ae8a58665f02a1508d66a33f0aafa"} Nov 25 20:40:44 crc kubenswrapper[4983]: E1125 20:40:44.369364 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" podUID="ff284fea-7792-40e1-8ede-f52412a6c014" Nov 25 20:40:44 crc kubenswrapper[4983]: I1125 20:40:44.370704 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" event={"ID":"ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042","Type":"ContainerStarted","Data":"dc82b4530679fa7d34e7fb8b674d64f4853167cb3470dc2bb390cb316489ba15"} Nov 25 20:40:44 crc kubenswrapper[4983]: I1125 20:40:44.375255 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" event={"ID":"64141c1d-799a-4d72-aa99-e54975052879","Type":"ContainerStarted","Data":"17a6547a6ff67e3c32858ad13595b38932df68cb6dd55d7b704736040489addf"} Nov 25 20:40:44 crc kubenswrapper[4983]: E1125 20:40:44.375467 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" podUID="ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042" Nov 25 20:40:44 crc kubenswrapper[4983]: I1125 20:40:44.382980 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" event={"ID":"d7302bdd-d74f-4d95-a354-42fcd52bf22e","Type":"ContainerStarted","Data":"7fe6254de1362f36956d4b99f74054bdc724d530da278fbdaf60184204f68994"} Nov 25 20:40:44 crc kubenswrapper[4983]: I1125 20:40:44.388749 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" event={"ID":"a096f840-35b3-48c1-8c0e-762b67b8bde0","Type":"ContainerStarted","Data":"6f20f75bedf768376343e21ed6e88ab5e7e125311be5c91a360ae7ee8b8c536b"} Nov 25 20:40:44 crc kubenswrapper[4983]: I1125 20:40:44.390616 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" event={"ID":"badb10c7-4c8c-42c4-b481-221377fa7255","Type":"ContainerStarted","Data":"e19e2001e6eeb213a89727f44b1b7f75390909459516281b85900735434c5976"} Nov 25 20:40:44 crc kubenswrapper[4983]: I1125 20:40:44.392157 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" event={"ID":"1e439ca1-98f3-4650-96da-1e4c1b2da37e","Type":"ContainerStarted","Data":"1434f68da7fbd71211ead2ccf5f6a24daf3d454331f50109d32df53972616a13"} Nov 25 20:40:44 crc kubenswrapper[4983]: I1125 20:40:44.393247 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" event={"ID":"5b14316c-9639-4934-a5e9-5381d2797ef5","Type":"ContainerStarted","Data":"271bd917053dd232b8d72e65cefdb1d89aa8de864ee19542de16a32b9f8a2de4"} Nov 25 20:40:44 crc kubenswrapper[4983]: E1125 20:40:44.396487 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" podUID="1e439ca1-98f3-4650-96da-1e4c1b2da37e" Nov 25 20:40:44 crc kubenswrapper[4983]: E1125 20:40:44.396754 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" podUID="5b14316c-9639-4934-a5e9-5381d2797ef5" Nov 25 20:40:45 crc kubenswrapper[4983]: I1125 20:40:45.301256 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert\") pod \"infra-operator-controller-manager-57548d458d-qlm9k\" (UID: \"0d3d657c-e179-43c7-abca-c37f8396d1cd\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:45 crc kubenswrapper[4983]: E1125 20:40:45.301514 4983 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 20:40:45 crc kubenswrapper[4983]: E1125 20:40:45.301647 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert podName:0d3d657c-e179-43c7-abca-c37f8396d1cd nodeName:}" failed. No retries permitted until 2025-11-25 20:40:49.301614139 +0000 UTC m=+830.414147651 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert") pod "infra-operator-controller-manager-57548d458d-qlm9k" (UID: "0d3d657c-e179-43c7-abca-c37f8396d1cd") : secret "infra-operator-webhook-server-cert" not found Nov 25 20:40:45 crc kubenswrapper[4983]: E1125 20:40:45.401992 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" podUID="ff284fea-7792-40e1-8ede-f52412a6c014" Nov 25 20:40:45 crc kubenswrapper[4983]: E1125 20:40:45.402860 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" podUID="5b14316c-9639-4934-a5e9-5381d2797ef5" Nov 25 20:40:45 crc kubenswrapper[4983]: E1125 20:40:45.402996 4983 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" podUID="1e439ca1-98f3-4650-96da-1e4c1b2da37e" Nov 25 20:40:45 crc kubenswrapper[4983]: E1125 20:40:45.403767 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" podUID="ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042" Nov 25 20:40:45 crc kubenswrapper[4983]: I1125 20:40:45.605222 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg\" (UID: \"4743af06-44e2-438a-82b7-bf32b0f5ca03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:45 crc kubenswrapper[4983]: E1125 20:40:45.605502 4983 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 20:40:45 crc kubenswrapper[4983]: E1125 20:40:45.605644 4983 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert podName:4743af06-44e2-438a-82b7-bf32b0f5ca03 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:49.605619088 +0000 UTC m=+830.718152480 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" (UID: "4743af06-44e2-438a-82b7-bf32b0f5ca03") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 20:40:45 crc kubenswrapper[4983]: I1125 20:40:45.910917 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:45 crc kubenswrapper[4983]: E1125 20:40:45.911223 4983 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 20:40:45 crc kubenswrapper[4983]: I1125 20:40:45.911255 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:45 crc kubenswrapper[4983]: E1125 20:40:45.911326 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs podName:f32095da-1fdc-4d52-b082-98b39652cdc6 nodeName:}" failed. 
No retries permitted until 2025-11-25 20:40:49.911304281 +0000 UTC m=+831.023837673 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs") pod "openstack-operator-controller-manager-5cf7cd9d4-bwfnd" (UID: "f32095da-1fdc-4d52-b082-98b39652cdc6") : secret "metrics-server-cert" not found Nov 25 20:40:45 crc kubenswrapper[4983]: E1125 20:40:45.911507 4983 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 20:40:45 crc kubenswrapper[4983]: E1125 20:40:45.911624 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs podName:f32095da-1fdc-4d52-b082-98b39652cdc6 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:49.911603399 +0000 UTC m=+831.024136781 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs") pod "openstack-operator-controller-manager-5cf7cd9d4-bwfnd" (UID: "f32095da-1fdc-4d52-b082-98b39652cdc6") : secret "webhook-server-cert" not found Nov 25 20:40:49 crc kubenswrapper[4983]: I1125 20:40:49.379532 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert\") pod \"infra-operator-controller-manager-57548d458d-qlm9k\" (UID: \"0d3d657c-e179-43c7-abca-c37f8396d1cd\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:49 crc kubenswrapper[4983]: E1125 20:40:49.380114 4983 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 20:40:49 crc kubenswrapper[4983]: E1125 20:40:49.380174 4983 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert podName:0d3d657c-e179-43c7-abca-c37f8396d1cd nodeName:}" failed. No retries permitted until 2025-11-25 20:40:57.380156489 +0000 UTC m=+838.492689881 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert") pod "infra-operator-controller-manager-57548d458d-qlm9k" (UID: "0d3d657c-e179-43c7-abca-c37f8396d1cd") : secret "infra-operator-webhook-server-cert" not found Nov 25 20:40:49 crc kubenswrapper[4983]: I1125 20:40:49.687134 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg\" (UID: \"4743af06-44e2-438a-82b7-bf32b0f5ca03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:49 crc kubenswrapper[4983]: E1125 20:40:49.687373 4983 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 20:40:49 crc kubenswrapper[4983]: E1125 20:40:49.687422 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert podName:4743af06-44e2-438a-82b7-bf32b0f5ca03 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:57.687407454 +0000 UTC m=+838.799940846 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" (UID: "4743af06-44e2-438a-82b7-bf32b0f5ca03") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 20:40:50 crc kubenswrapper[4983]: I1125 20:40:50.005255 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:50 crc kubenswrapper[4983]: I1125 20:40:50.005764 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:50 crc kubenswrapper[4983]: E1125 20:40:50.005992 4983 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 20:40:50 crc kubenswrapper[4983]: E1125 20:40:50.006133 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs podName:f32095da-1fdc-4d52-b082-98b39652cdc6 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:58.006111653 +0000 UTC m=+839.118645045 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs") pod "openstack-operator-controller-manager-5cf7cd9d4-bwfnd" (UID: "f32095da-1fdc-4d52-b082-98b39652cdc6") : secret "metrics-server-cert" not found Nov 25 20:40:50 crc kubenswrapper[4983]: E1125 20:40:50.005988 4983 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 20:40:50 crc kubenswrapper[4983]: E1125 20:40:50.006279 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs podName:f32095da-1fdc-4d52-b082-98b39652cdc6 nodeName:}" failed. No retries permitted until 2025-11-25 20:40:58.006271668 +0000 UTC m=+839.118805060 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs") pod "openstack-operator-controller-manager-5cf7cd9d4-bwfnd" (UID: "f32095da-1fdc-4d52-b082-98b39652cdc6") : secret "webhook-server-cert" not found Nov 25 20:40:55 crc kubenswrapper[4983]: E1125 20:40:55.278684 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:ec4e5c911c1d0f1ea211a04b251a9d2e95b69d141c1caf07a0381693b2d6368b" Nov 25 20:40:55 crc kubenswrapper[4983]: E1125 20:40:55.279762 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:ec4e5c911c1d0f1ea211a04b251a9d2e95b69d141c1caf07a0381693b2d6368b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bqzls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-955677c94-lzn84_openstack-operators(00a7db78-81a7-481d-a20e-135c60e139e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:40:55 crc kubenswrapper[4983]: I1125 20:40:55.711768 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f88c9"] Nov 25 20:40:55 crc kubenswrapper[4983]: I1125 20:40:55.713957 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:40:55 crc kubenswrapper[4983]: I1125 20:40:55.738861 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f88c9"] Nov 25 20:40:55 crc kubenswrapper[4983]: I1125 20:40:55.814465 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-catalog-content\") pod \"certified-operators-f88c9\" (UID: \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\") " pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:40:55 crc kubenswrapper[4983]: I1125 20:40:55.814636 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-477xl\" (UniqueName: \"kubernetes.io/projected/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-kube-api-access-477xl\") pod \"certified-operators-f88c9\" (UID: \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\") " pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:40:55 crc kubenswrapper[4983]: I1125 20:40:55.814676 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-utilities\") pod \"certified-operators-f88c9\" (UID: \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\") " pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:40:55 crc kubenswrapper[4983]: I1125 20:40:55.916421 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-catalog-content\") pod \"certified-operators-f88c9\" (UID: \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\") " pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:40:55 crc kubenswrapper[4983]: I1125 20:40:55.916532 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-477xl\" (UniqueName: \"kubernetes.io/projected/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-kube-api-access-477xl\") pod \"certified-operators-f88c9\" (UID: \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\") " pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:40:55 crc kubenswrapper[4983]: I1125 20:40:55.916565 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-utilities\") pod \"certified-operators-f88c9\" (UID: \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\") " pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:40:55 crc kubenswrapper[4983]: I1125 20:40:55.917093 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-utilities\") pod \"certified-operators-f88c9\" (UID: \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\") " pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:40:55 crc kubenswrapper[4983]: I1125 20:40:55.917497 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-catalog-content\") pod \"certified-operators-f88c9\" (UID: \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\") " pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:40:55 crc kubenswrapper[4983]: I1125 20:40:55.954839 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-477xl\" (UniqueName: \"kubernetes.io/projected/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-kube-api-access-477xl\") pod \"certified-operators-f88c9\" (UID: \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\") " pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:40:56 crc kubenswrapper[4983]: E1125 20:40:56.016693 4983 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7" Nov 25 20:40:56 crc kubenswrapper[4983]: E1125 20:40:56.016990 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-grjj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-dj7nt_openstack-operators(9d7c78e4-4890-4527-9db4-131842750615): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:40:56 crc kubenswrapper[4983]: I1125 20:40:56.038040 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:40:56 crc kubenswrapper[4983]: E1125 20:40:56.805229 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:ca332e48d07f932e470177e48dba9332848a1d14c857cff6f9bfb1adc1998482" Nov 25 20:40:56 crc kubenswrapper[4983]: E1125 20:40:56.805538 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:ca332e48d07f932e470177e48dba9332848a1d14c857cff6f9bfb1adc1998482,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgzzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6b7f75547b-b9lnt_openstack-operators(cf765330-a0f9-4603-a92b-4aec8feaeafb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:40:57 crc kubenswrapper[4983]: E1125 20:40:57.441752 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c" Nov 25 20:40:57 crc kubenswrapper[4983]: E1125 20:40:57.441968 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ddc8a82f05930db8ee7a8d6d189b5a66373060656e4baf71ac302f89c477da4c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8w48r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-64cdc6ff96-p8q9g_openstack-operators(a096f840-35b3-48c1-8c0e-762b67b8bde0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:40:57 crc kubenswrapper[4983]: I1125 20:40:57.443871 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert\") pod \"infra-operator-controller-manager-57548d458d-qlm9k\" (UID: \"0d3d657c-e179-43c7-abca-c37f8396d1cd\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:57 crc kubenswrapper[4983]: I1125 20:40:57.451379 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0d3d657c-e179-43c7-abca-c37f8396d1cd-cert\") pod \"infra-operator-controller-manager-57548d458d-qlm9k\" (UID: \"0d3d657c-e179-43c7-abca-c37f8396d1cd\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:57 crc kubenswrapper[4983]: I1125 20:40:57.493854 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:40:57 crc kubenswrapper[4983]: I1125 20:40:57.747760 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg\" (UID: \"4743af06-44e2-438a-82b7-bf32b0f5ca03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:57 crc kubenswrapper[4983]: I1125 20:40:57.760480 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4743af06-44e2-438a-82b7-bf32b0f5ca03-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg\" (UID: \"4743af06-44e2-438a-82b7-bf32b0f5ca03\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:57 crc kubenswrapper[4983]: I1125 20:40:57.782266 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:40:57 crc kubenswrapper[4983]: E1125 20:40:57.969337 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711" Nov 25 20:40:57 crc kubenswrapper[4983]: E1125 20:40:57.969595 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f5rcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7b4567c7cf-fchv4_openstack-operators(e5edd26f-9ffb-4be8-86c1-99d32e812816): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.055377 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.056361 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.059628 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-webhook-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.060098 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f32095da-1fdc-4d52-b082-98b39652cdc6-metrics-certs\") pod \"openstack-operator-controller-manager-5cf7cd9d4-bwfnd\" (UID: \"f32095da-1fdc-4d52-b082-98b39652cdc6\") " pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.102835 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.502550 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k"] Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.559185 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" event={"ID":"48b3567f-5b1a-4f14-891c-775c05e2d768","Type":"ContainerStarted","Data":"3e9d64d65e58ea7df2d9124a0951eb9ee7d90f8ce11b33d384241318847d1139"} Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.562048 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" event={"ID":"1ec6aefb-824e-4248-ac00-c1d0b526edc6","Type":"ContainerStarted","Data":"f3c6fc8c1840d8e98b8185fd5d3f03c7e641009cf3daa9c8f72ddb04f016b9e4"} Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.563930 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" event={"ID":"da827172-6e3a-42a7-814c-cdfcc18d48d6","Type":"ContainerStarted","Data":"43d73d322ace9f5e5257f02c32c895eacc10e4b59bebab41918aed7b66b3669a"} Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.567514 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" event={"ID":"2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98","Type":"ContainerStarted","Data":"e247a797cf007e4baa3d65b21cface4683168422ec0bec2f754f806a1b887169"} Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.570207 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" 
event={"ID":"92f1d8fa-69cf-49c3-a616-82a185ff8dd5","Type":"ContainerStarted","Data":"1708c9a77068dfec18ed8730dc47b4dcc63fcd4eb60fd0dcaa6ea3fe29af7859"} Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.669226 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f88c9"] Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.685493 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg"] Nov 25 20:40:58 crc kubenswrapper[4983]: W1125 20:40:58.718329 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d3d657c_e179_43c7_abca_c37f8396d1cd.slice/crio-f84a03225190cadc3acca43000eadf647fc15ede152f9702704c1fc43d75a2d4 WatchSource:0}: Error finding container f84a03225190cadc3acca43000eadf647fc15ede152f9702704c1fc43d75a2d4: Status 404 returned error can't find the container with id f84a03225190cadc3acca43000eadf647fc15ede152f9702704c1fc43d75a2d4 Nov 25 20:40:58 crc kubenswrapper[4983]: I1125 20:40:58.879416 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd"] Nov 25 20:40:58 crc kubenswrapper[4983]: E1125 20:40:58.911212 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ptd2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57988cc5b5-mhjtj_openstack-operators(64141c1d-799a-4d72-aa99-e54975052879): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 20:40:58 crc kubenswrapper[4983]: E1125 20:40:58.912712 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" podUID="64141c1d-799a-4d72-aa99-e54975052879" Nov 25 20:40:59 crc kubenswrapper[4983]: I1125 20:40:59.607776 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" event={"ID":"afff7723-36e3-42ae-9fac-9f8fdb86d839","Type":"ContainerStarted","Data":"9592feffbfcdb7be9b3e19c3a4a5ddee1f33b87b1d6917be2e5b8103c0b057e1"} Nov 25 20:40:59 crc kubenswrapper[4983]: I1125 20:40:59.755990 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" 
event={"ID":"e1668e7f-55bb-415c-b378-1c70483b30a6","Type":"ContainerStarted","Data":"008b13266f643deb81f8b41a3984d80ef9128e634260e9b2080a6431dd4580c1"} Nov 25 20:40:59 crc kubenswrapper[4983]: I1125 20:40:59.756050 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" event={"ID":"badb10c7-4c8c-42c4-b481-221377fa7255","Type":"ContainerStarted","Data":"b325d87e8c2ba3da5523a71d4dfe14afb4b9bb4ca62a42e706a8dbb1ef803d0b"} Nov 25 20:40:59 crc kubenswrapper[4983]: I1125 20:40:59.756096 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" Nov 25 20:40:59 crc kubenswrapper[4983]: I1125 20:40:59.756108 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" event={"ID":"0d3d657c-e179-43c7-abca-c37f8396d1cd","Type":"ContainerStarted","Data":"f84a03225190cadc3acca43000eadf647fc15ede152f9702704c1fc43d75a2d4"} Nov 25 20:40:59 crc kubenswrapper[4983]: I1125 20:40:59.756120 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" event={"ID":"64141c1d-799a-4d72-aa99-e54975052879","Type":"ContainerStarted","Data":"43db46b0b3b02b2d4e825e80e3b8ab609c79d422bd430339cf81f4ce44095b64"} Nov 25 20:40:59 crc kubenswrapper[4983]: E1125 20:40:59.757479 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" podUID="64141c1d-799a-4d72-aa99-e54975052879" Nov 25 20:40:59 crc kubenswrapper[4983]: I1125 20:40:59.817821 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" event={"ID":"d7302bdd-d74f-4d95-a354-42fcd52bf22e","Type":"ContainerStarted","Data":"3925847c4c4b73204357af3b98c257a3a464f1b2a6414365a8a0055ec0eb5c11"} Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.481746 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nmnq6"] Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.486106 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.498942 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmnq6"] Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.613773 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-utilities\") pod \"redhat-operators-nmnq6\" (UID: \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\") " pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.613888 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-catalog-content\") pod \"redhat-operators-nmnq6\" (UID: \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\") " pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.613949 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88ptg\" (UniqueName: \"kubernetes.io/projected/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-kube-api-access-88ptg\") pod \"redhat-operators-nmnq6\" (UID: \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\") " pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:00 
crc kubenswrapper[4983]: I1125 20:41:00.726426 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88ptg\" (UniqueName: \"kubernetes.io/projected/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-kube-api-access-88ptg\") pod \"redhat-operators-nmnq6\" (UID: \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\") " pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.726494 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-utilities\") pod \"redhat-operators-nmnq6\" (UID: \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\") " pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.726581 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-catalog-content\") pod \"redhat-operators-nmnq6\" (UID: \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\") " pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.727260 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-catalog-content\") pod \"redhat-operators-nmnq6\" (UID: \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\") " pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.730153 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-utilities\") pod \"redhat-operators-nmnq6\" (UID: \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\") " pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.761985 4983 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88ptg\" (UniqueName: \"kubernetes.io/projected/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-kube-api-access-88ptg\") pod \"redhat-operators-nmnq6\" (UID: \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\") " pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.820616 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.832379 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" event={"ID":"4743af06-44e2-438a-82b7-bf32b0f5ca03","Type":"ContainerStarted","Data":"9e72db3f175c2b26ec4dd93e058d670a457ffd9f924adffc38a3452ac6dd8275"} Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.839789 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88c9" event={"ID":"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c","Type":"ContainerStarted","Data":"dc45675159dc900761512ea3878ddb3e45ecf592e0f51a344b9c7d3484e1bec4"} Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.856543 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" event={"ID":"72f1d28e-26ff-43d3-bd93-54c21d9cdd70","Type":"ContainerStarted","Data":"2689d7a466b399d9262e61b5e10344c8ac51ee7582650191db77c17c16878761"} Nov 25 20:41:00 crc kubenswrapper[4983]: I1125 20:41:00.863472 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" event={"ID":"f32095da-1fdc-4d52-b082-98b39652cdc6","Type":"ContainerStarted","Data":"ff0bf719db9f4b692ae46bc4c214446ef89b7d929025c9b26f7a279a205127d9"} Nov 25 20:41:00 crc kubenswrapper[4983]: E1125 20:41:00.869381 4983 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" podUID="64141c1d-799a-4d72-aa99-e54975052879" Nov 25 20:41:01 crc kubenswrapper[4983]: I1125 20:41:01.911984 4983 generic.go:334] "Generic (PLEG): container finished" podID="e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" containerID="6d56181e2483c57a7585896d360887297641ed32a01a18a967b39d615e14fa61" exitCode=0 Nov 25 20:41:01 crc kubenswrapper[4983]: I1125 20:41:01.912331 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88c9" event={"ID":"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c","Type":"ContainerDied","Data":"6d56181e2483c57a7585896d360887297641ed32a01a18a967b39d615e14fa61"} Nov 25 20:41:01 crc kubenswrapper[4983]: I1125 20:41:01.947408 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" event={"ID":"f32095da-1fdc-4d52-b082-98b39652cdc6","Type":"ContainerStarted","Data":"75e7c1267f1210f5dbccd40298f1b4a84918450298e182c51176721003d9c049"} Nov 25 20:41:01 crc kubenswrapper[4983]: I1125 20:41:01.948839 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:41:01 crc kubenswrapper[4983]: I1125 20:41:01.987439 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" podStartSLOduration=19.987411614 podStartE2EDuration="19.987411614s" podCreationTimestamp="2025-11-25 20:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:41:01.984288741 +0000 UTC m=+843.096822133" 
watchObservedRunningTime="2025-11-25 20:41:01.987411614 +0000 UTC m=+843.099945006" Nov 25 20:41:08 crc kubenswrapper[4983]: I1125 20:41:08.130715 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:41:09 crc kubenswrapper[4983]: I1125 20:41:09.927988 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:41:09 crc kubenswrapper[4983]: I1125 20:41:09.928395 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:41:09 crc kubenswrapper[4983]: I1125 20:41:09.928464 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:41:09 crc kubenswrapper[4983]: I1125 20:41:09.929220 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"332f27d6dcaee6d6f56ec3302fd09a3529205e5c94a5a306755d9476fe03353d"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:41:09 crc kubenswrapper[4983]: I1125 20:41:09.929276 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" 
containerID="cri-o://332f27d6dcaee6d6f56ec3302fd09a3529205e5c94a5a306755d9476fe03353d" gracePeriod=600 Nov 25 20:41:12 crc kubenswrapper[4983]: I1125 20:41:12.036959 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="332f27d6dcaee6d6f56ec3302fd09a3529205e5c94a5a306755d9476fe03353d" exitCode=0 Nov 25 20:41:12 crc kubenswrapper[4983]: I1125 20:41:12.037057 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"332f27d6dcaee6d6f56ec3302fd09a3529205e5c94a5a306755d9476fe03353d"} Nov 25 20:41:12 crc kubenswrapper[4983]: I1125 20:41:12.037629 4983 scope.go:117] "RemoveContainer" containerID="7306555a4508b1828e5cf4831dc81aad7a61440dcfa7cbd1e1c973af6958d2b0" Nov 25 20:41:12 crc kubenswrapper[4983]: I1125 20:41:12.258733 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" Nov 25 20:41:13 crc kubenswrapper[4983]: E1125 20:41:13.386770 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 25 20:41:13 crc kubenswrapper[4983]: E1125 20:41:13.386812 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 25 20:41:13 crc kubenswrapper[4983]: E1125 20:41:13.386987 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-br6r4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7b64f4fb85-nf6tq_openstack-operators(1ec6aefb-824e-4248-ac00-c1d0b526edc6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:41:13 crc kubenswrapper[4983]: E1125 20:41:13.387331 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: 
{{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ncdt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5d499bf58b-f8bh4_openstack-operators(2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:41:13 crc kubenswrapper[4983]: E1125 20:41:13.388185 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" podUID="1ec6aefb-824e-4248-ac00-c1d0b526edc6" Nov 25 20:41:13 crc kubenswrapper[4983]: E1125 20:41:13.389262 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" podUID="2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98" Nov 
25 20:41:13 crc kubenswrapper[4983]: I1125 20:41:13.838540 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmnq6"] Nov 25 20:41:13 crc kubenswrapper[4983]: E1125 20:41:13.991888 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa" Nov 25 20:41:13 crc kubenswrapper[4983]: E1125 20:41:13.992669 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bpqw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-lr7wt_openstack-operators(ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:41:14 crc kubenswrapper[4983]: I1125 20:41:14.051583 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" Nov 25 20:41:14 crc kubenswrapper[4983]: I1125 20:41:14.051876 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" Nov 25 20:41:14 crc kubenswrapper[4983]: I1125 20:41:14.058965 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" Nov 25 20:41:14 crc 
kubenswrapper[4983]: I1125 20:41:14.059218 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" Nov 25 20:41:14 crc kubenswrapper[4983]: E1125 20:41:14.579860 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 25 20:41:14 crc kubenswrapper[4983]: E1125 20:41:14.580080 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgpwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-bwf7d_openstack-operators(ff284fea-7792-40e1-8ede-f52412a6c014): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:41:14 crc kubenswrapper[4983]: E1125 20:41:14.581241 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" podUID="ff284fea-7792-40e1-8ede-f52412a6c014" Nov 25 20:41:14 crc kubenswrapper[4983]: W1125 20:41:14.602348 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88c4bc44_3aa0_466f_a9e0_eb9db4c2390c.slice/crio-5299e594bc13e919d1a5112780e469520f41ba9d0a6eef9670048ecbc2dd5000 WatchSource:0}: Error finding container 5299e594bc13e919d1a5112780e469520f41ba9d0a6eef9670048ecbc2dd5000: Status 404 returned error can't find the container with id 
5299e594bc13e919d1a5112780e469520f41ba9d0a6eef9670048ecbc2dd5000 Nov 25 20:41:15 crc kubenswrapper[4983]: I1125 20:41:15.091468 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" event={"ID":"badb10c7-4c8c-42c4-b481-221377fa7255","Type":"ContainerStarted","Data":"48de7507863af5386decd789b603aa1624cbff1ab4615f387e64f27b720ffb82"} Nov 25 20:41:15 crc kubenswrapper[4983]: I1125 20:41:15.092522 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" Nov 25 20:41:15 crc kubenswrapper[4983]: I1125 20:41:15.097939 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"02a7a7ce01bacff8c2eff18d797a1189b8fa10fb78c41ac31562d8f18df21be8"} Nov 25 20:41:15 crc kubenswrapper[4983]: I1125 20:41:15.098048 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" Nov 25 20:41:15 crc kubenswrapper[4983]: I1125 20:41:15.116496 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" podStartSLOduration=2.985731638 podStartE2EDuration="34.116473591s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:43.513927546 +0000 UTC m=+824.626460938" lastFinishedPulling="2025-11-25 20:41:14.644669509 +0000 UTC m=+855.757202891" observedRunningTime="2025-11-25 20:41:15.111209531 +0000 UTC m=+856.223742923" watchObservedRunningTime="2025-11-25 20:41:15.116473591 +0000 UTC m=+856.229006973" Nov 25 20:41:15 crc kubenswrapper[4983]: I1125 20:41:15.121016 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-nmnq6" event={"ID":"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c","Type":"ContainerStarted","Data":"5299e594bc13e919d1a5112780e469520f41ba9d0a6eef9670048ecbc2dd5000"} Nov 25 20:41:15 crc kubenswrapper[4983]: I1125 20:41:15.127348 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" event={"ID":"4743af06-44e2-438a-82b7-bf32b0f5ca03","Type":"ContainerStarted","Data":"85db8e24ee4320e5d133fc2ba6087f768cb0cd5c42aed98b4791d4dc9ad25df5"} Nov 25 20:41:15 crc kubenswrapper[4983]: E1125 20:41:15.327005 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" podUID="9d7c78e4-4890-4527-9db4-131842750615" Nov 25 20:41:15 crc kubenswrapper[4983]: E1125 20:41:15.483006 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" Nov 25 20:41:15 crc kubenswrapper[4983]: E1125 20:41:15.793986 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" podUID="e5edd26f-9ffb-4be8-86c1-99d32e812816" Nov 25 20:41:15 crc kubenswrapper[4983]: E1125 20:41:15.853992 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" podUID="00a7db78-81a7-481d-a20e-135c60e139e3" Nov 25 20:41:16 crc kubenswrapper[4983]: E1125 20:41:16.085060 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" podUID="ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.141526 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" event={"ID":"1e439ca1-98f3-4650-96da-1e4c1b2da37e","Type":"ContainerStarted","Data":"5ef7631387b47665b07c4873cfee1b9d2f606c285a9e4c6bafb3305ca8cfe8c6"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.145273 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" event={"ID":"5b14316c-9639-4934-a5e9-5381d2797ef5","Type":"ContainerStarted","Data":"ce3bb525bce7355f782d0164aa0dcec2c15378d6b3aaffc8bbf1521842c8c9ae"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.147192 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" event={"ID":"da827172-6e3a-42a7-814c-cdfcc18d48d6","Type":"ContainerStarted","Data":"56a5341e550b730f002ce314e2bb8eba5ce3c9eb1c4a43ca1a21f6a94467068e"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.148635 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.150082 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" 
event={"ID":"0d3d657c-e179-43c7-abca-c37f8396d1cd","Type":"ContainerStarted","Data":"d1a8e350a61ce7e9dc9dbf72ea6c3efcf9e48faf3b4f1af458dc6c5aa273ecdf"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.152420 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.161492 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" event={"ID":"2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98","Type":"ContainerStarted","Data":"7d8e63bc4c70a43d4c417160e841b793c326d1050c2a0989ac4f3158ee8ce893"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.167374 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" event={"ID":"d7302bdd-d74f-4d95-a354-42fcd52bf22e","Type":"ContainerStarted","Data":"ca9b3e73a9d18fb6ed192cd357912b573d4ca296c42bd39944c16c5ea0260f21"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.167739 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.170388 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.180640 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" podStartSLOduration=3.4457910050000002 podStartE2EDuration="35.180615877s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:42.933174185 +0000 UTC m=+824.045707577" lastFinishedPulling="2025-11-25 20:41:14.667999057 +0000 UTC m=+855.780532449" 
observedRunningTime="2025-11-25 20:41:16.178968524 +0000 UTC m=+857.291501916" watchObservedRunningTime="2025-11-25 20:41:16.180615877 +0000 UTC m=+857.293149269" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.189069 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" event={"ID":"00a7db78-81a7-481d-a20e-135c60e139e3","Type":"ContainerStarted","Data":"cc066b7027f27821e8981487f533f90fbc4213e736356d379ab0f6ea55258eb9"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.216986 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" event={"ID":"48b3567f-5b1a-4f14-891c-775c05e2d768","Type":"ContainerStarted","Data":"a4513d66d1474acbe372a01747f3153bdc68c7865399dbff89102cd3e26c257a"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.218761 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.228143 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.234225 4983 generic.go:334] "Generic (PLEG): container finished" podID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" containerID="7accebf509a349da44ab292c59edb31fcb0032b8a5495f4a949ed805832939e2" exitCode=0 Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.234354 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmnq6" event={"ID":"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c","Type":"ContainerDied","Data":"7accebf509a349da44ab292c59edb31fcb0032b8a5495f4a949ed805832939e2"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.252505 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" event={"ID":"e1668e7f-55bb-415c-b378-1c70483b30a6","Type":"ContainerStarted","Data":"800e256cb8f1989e7c49b1a3d3eec24c91626673241d35efd810c9eb810f8c9a"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.253544 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.254762 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" podStartSLOduration=4.084375323 podStartE2EDuration="35.25474605s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:43.487874822 +0000 UTC m=+824.600408214" lastFinishedPulling="2025-11-25 20:41:14.658245549 +0000 UTC m=+855.770778941" observedRunningTime="2025-11-25 20:41:16.251400051 +0000 UTC m=+857.363933433" watchObservedRunningTime="2025-11-25 20:41:16.25474605 +0000 UTC m=+857.367279442" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.264587 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.264841 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" event={"ID":"a096f840-35b3-48c1-8c0e-762b67b8bde0","Type":"ContainerStarted","Data":"85345b0ce70302e6ada301fc94fc9ceba5a03fb31a39dff1b3c536c19a2ef241"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.283606 4983 generic.go:334] "Generic (PLEG): container finished" podID="e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" containerID="cd4283ba1e9f1d3c36fd5c8d7ae6c3b6ac7184891f0e6eb7cfe9e732967584b1" exitCode=0 Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.283666 4983 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88c9" event={"ID":"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c","Type":"ContainerDied","Data":"cd4283ba1e9f1d3c36fd5c8d7ae6c3b6ac7184891f0e6eb7cfe9e732967584b1"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.320872 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" event={"ID":"92f1d8fa-69cf-49c3-a616-82a185ff8dd5","Type":"ContainerStarted","Data":"cd34871f36044cd0d655f91ff25d6536286e88c8dc28a39743a12e20015bfdfe"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.321852 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.337151 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" podStartSLOduration=20.86978248 podStartE2EDuration="35.337122471s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:43.518522538 +0000 UTC m=+824.631055930" lastFinishedPulling="2025-11-25 20:40:57.985862529 +0000 UTC m=+839.098395921" observedRunningTime="2025-11-25 20:41:16.286716927 +0000 UTC m=+857.399250329" watchObservedRunningTime="2025-11-25 20:41:16.337122471 +0000 UTC m=+857.449655863" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.351943 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" event={"ID":"ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042","Type":"ContainerStarted","Data":"37358255c3d630611a298f11ba20a5fc0e8cdd11945eec1c738cc7ff2f3316e1"} Nov 25 20:41:16 crc kubenswrapper[4983]: E1125 20:41:16.356278 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" podUID="ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.361952 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.377256 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" event={"ID":"e5edd26f-9ffb-4be8-86c1-99d32e812816","Type":"ContainerStarted","Data":"89895c2a3990fe2e7fa126fac2a8e61894a5ddcf931851e40a13c39a457776cc"} Nov 25 20:41:16 crc kubenswrapper[4983]: E1125 20:41:16.396831 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" podUID="cf765330-a0f9-4603-a92b-4aec8feaeafb" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.397021 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" event={"ID":"1ec6aefb-824e-4248-ac00-c1d0b526edc6","Type":"ContainerStarted","Data":"75a98ca77844e5e30d48cf57a848161292f004f3d2f195d0dfc9692fefc10670"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.440717 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" event={"ID":"9d7c78e4-4890-4527-9db4-131842750615","Type":"ContainerStarted","Data":"c9fdb5d08a8bb275393086c14f864161325553c3232a1cc8dc4d7f80418c1b22"} Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.550238 4983 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" podStartSLOduration=3.794493089 podStartE2EDuration="35.550221334s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:42.937913901 +0000 UTC m=+824.050447293" lastFinishedPulling="2025-11-25 20:41:14.693642136 +0000 UTC m=+855.806175538" observedRunningTime="2025-11-25 20:41:16.547084811 +0000 UTC m=+857.659618203" watchObservedRunningTime="2025-11-25 20:41:16.550221334 +0000 UTC m=+857.662754726" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.661084 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" podStartSLOduration=3.9761456710000003 podStartE2EDuration="35.661061188s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:42.959766883 +0000 UTC m=+824.072300275" lastFinishedPulling="2025-11-25 20:41:14.6446824 +0000 UTC m=+855.757215792" observedRunningTime="2025-11-25 20:41:16.612935224 +0000 UTC m=+857.725468616" watchObservedRunningTime="2025-11-25 20:41:16.661061188 +0000 UTC m=+857.773594580" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.707351 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" podStartSLOduration=4.680759985 podStartE2EDuration="35.707321203s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:43.619625471 +0000 UTC m=+824.732158863" lastFinishedPulling="2025-11-25 20:41:14.646186669 +0000 UTC m=+855.758720081" observedRunningTime="2025-11-25 20:41:16.703848711 +0000 UTC m=+857.816382103" watchObservedRunningTime="2025-11-25 20:41:16.707321203 +0000 UTC m=+857.819854595" Nov 25 20:41:16 crc kubenswrapper[4983]: I1125 20:41:16.798118 4983 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" podStartSLOduration=20.130256736 podStartE2EDuration="35.798098017s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:42.313079157 +0000 UTC m=+823.425612549" lastFinishedPulling="2025-11-25 20:40:57.980920438 +0000 UTC m=+839.093453830" observedRunningTime="2025-11-25 20:41:16.797202123 +0000 UTC m=+857.909735515" watchObservedRunningTime="2025-11-25 20:41:16.798098017 +0000 UTC m=+857.910631409" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.450133 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" event={"ID":"00a7db78-81a7-481d-a20e-135c60e139e3","Type":"ContainerStarted","Data":"c3fb31f1b338ebf2c6933c9cd12a13b813bde606aa2d327b8486df26a9a1159e"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.451052 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.451662 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" event={"ID":"cf765330-a0f9-4603-a92b-4aec8feaeafb","Type":"ContainerStarted","Data":"7510c5e2baf96b55651508c599702ea45111a6a6038731f60a2c8e97983de8e3"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.453867 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" event={"ID":"afff7723-36e3-42ae-9fac-9f8fdb86d839","Type":"ContainerStarted","Data":"c23417f136d49861b5bef30a59b9df7827e0a6b5fdf58a8c4907e8e83a466490"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.454024 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.456798 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" event={"ID":"0d3d657c-e179-43c7-abca-c37f8396d1cd","Type":"ContainerStarted","Data":"24c820957c6ace057dda324c665e8631c8e04c1c3b6702d234c40052ecf49c23"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.456972 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.457027 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.458990 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" event={"ID":"e5edd26f-9ffb-4be8-86c1-99d32e812816","Type":"ContainerStarted","Data":"1a01b9904aaa3423a2eaf3d09ebb9c232e52fa9cdd9a996360b637a149ebe722"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.459825 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.462286 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" event={"ID":"9d7c78e4-4890-4527-9db4-131842750615","Type":"ContainerStarted","Data":"a51cd56a7a6390cdbb05b64926ccaaf27335f93ef9186bb6b69d703b3cdead49"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.462813 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" Nov 25 20:41:17 crc 
kubenswrapper[4983]: I1125 20:41:17.464842 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmnq6" event={"ID":"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c","Type":"ContainerStarted","Data":"29c04379c595ac3d5c592e076e4c0234110539c453085d49d18d314a513331fa"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.467497 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" event={"ID":"4743af06-44e2-438a-82b7-bf32b0f5ca03","Type":"ContainerStarted","Data":"9a66702cef90a3bcaa8f1577766307e5a3751a5b3685a8c98132f1be23c285a3"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.468149 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.472129 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" event={"ID":"1e439ca1-98f3-4650-96da-1e4c1b2da37e","Type":"ContainerStarted","Data":"dae765ceeef3b8415e5e3908f698d731efbb7b7cc1a46a5d08ad20f36d408c35"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.472263 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.475857 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" podStartSLOduration=2.405968115 podStartE2EDuration="36.475830612s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:42.628412497 +0000 UTC m=+823.740945879" lastFinishedPulling="2025-11-25 20:41:16.698274984 +0000 UTC m=+857.810808376" observedRunningTime="2025-11-25 20:41:17.473216693 +0000 UTC 
m=+858.585750085" watchObservedRunningTime="2025-11-25 20:41:17.475830612 +0000 UTC m=+858.588364004" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.476278 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" event={"ID":"64141c1d-799a-4d72-aa99-e54975052879","Type":"ContainerStarted","Data":"b6d1cb0fb521da7fda4db245ea4e7c270341efe21b9a37d575098d0745f09986"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.478519 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88c9" event={"ID":"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c","Type":"ContainerStarted","Data":"541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.484377 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" event={"ID":"72f1d28e-26ff-43d3-bd93-54c21d9cdd70","Type":"ContainerStarted","Data":"99fef29c6859a73737ed27581a211f768dd8c005d6c79da4d022d78daed9b80a"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.484591 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.488166 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" event={"ID":"a096f840-35b3-48c1-8c0e-762b67b8bde0","Type":"ContainerStarted","Data":"faf022606aab866ff05a368881d0a696a1aaed4c2a6196b2d87a2b326d9574df"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.488233 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.490600 4983 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" event={"ID":"5b14316c-9639-4934-a5e9-5381d2797ef5","Type":"ContainerStarted","Data":"687428b5780ea67f85a43b04a192522564b05b0698b7cd4c2ea59d97112b5e8c"} Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.490642 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.500086 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" podStartSLOduration=2.473336921 podStartE2EDuration="36.500065873s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:42.959810344 +0000 UTC m=+824.072343736" lastFinishedPulling="2025-11-25 20:41:16.986539296 +0000 UTC m=+858.099072688" observedRunningTime="2025-11-25 20:41:17.497457224 +0000 UTC m=+858.609990616" watchObservedRunningTime="2025-11-25 20:41:17.500065873 +0000 UTC m=+858.612599265" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.529707 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" podStartSLOduration=21.736156435 podStartE2EDuration="36.529683468s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:59.830067108 +0000 UTC m=+840.942600500" lastFinishedPulling="2025-11-25 20:41:14.623594141 +0000 UTC m=+855.736127533" observedRunningTime="2025-11-25 20:41:17.524508161 +0000 UTC m=+858.637041553" watchObservedRunningTime="2025-11-25 20:41:17.529683468 +0000 UTC m=+858.642216860" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.551613 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" 
podStartSLOduration=21.293209628 podStartE2EDuration="36.551595788s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:58.723645214 +0000 UTC m=+839.836178606" lastFinishedPulling="2025-11-25 20:41:13.982031374 +0000 UTC m=+855.094564766" observedRunningTime="2025-11-25 20:41:17.546259647 +0000 UTC m=+858.658793039" watchObservedRunningTime="2025-11-25 20:41:17.551595788 +0000 UTC m=+858.664129180" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.573297 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" podStartSLOduration=4.960052241 podStartE2EDuration="36.573279112s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:43.065646804 +0000 UTC m=+824.178180196" lastFinishedPulling="2025-11-25 20:41:14.678873655 +0000 UTC m=+855.791407067" observedRunningTime="2025-11-25 20:41:17.566856482 +0000 UTC m=+858.679389874" watchObservedRunningTime="2025-11-25 20:41:17.573279112 +0000 UTC m=+858.685812504" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.592627 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" podStartSLOduration=5.535768254 podStartE2EDuration="36.592606384s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:43.561218285 +0000 UTC m=+824.673751677" lastFinishedPulling="2025-11-25 20:41:14.618056415 +0000 UTC m=+855.730589807" observedRunningTime="2025-11-25 20:41:17.588732941 +0000 UTC m=+858.701266343" watchObservedRunningTime="2025-11-25 20:41:17.592606384 +0000 UTC m=+858.705139766" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.640319 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" 
podStartSLOduration=3.169587467 podStartE2EDuration="36.640298237s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:43.517333066 +0000 UTC m=+824.629866458" lastFinishedPulling="2025-11-25 20:41:16.988043836 +0000 UTC m=+858.100577228" observedRunningTime="2025-11-25 20:41:17.63248646 +0000 UTC m=+858.745019852" watchObservedRunningTime="2025-11-25 20:41:17.640298237 +0000 UTC m=+858.752831619" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.671351 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" podStartSLOduration=5.62919629 podStartE2EDuration="36.671329908s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:43.572206998 +0000 UTC m=+824.684740390" lastFinishedPulling="2025-11-25 20:41:14.614340616 +0000 UTC m=+855.726874008" observedRunningTime="2025-11-25 20:41:17.668825842 +0000 UTC m=+858.781359234" watchObservedRunningTime="2025-11-25 20:41:17.671329908 +0000 UTC m=+858.783863300" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.693091 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f88c9" podStartSLOduration=8.555279398 podStartE2EDuration="22.693070294s" podCreationTimestamp="2025-11-25 20:40:55 +0000 UTC" firstStartedPulling="2025-11-25 20:41:02.648051416 +0000 UTC m=+843.760584808" lastFinishedPulling="2025-11-25 20:41:16.785842312 +0000 UTC m=+857.898375704" observedRunningTime="2025-11-25 20:41:17.690464165 +0000 UTC m=+858.802997557" watchObservedRunningTime="2025-11-25 20:41:17.693070294 +0000 UTC m=+858.805603686" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.723078 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" podStartSLOduration=4.882311031 
podStartE2EDuration="36.723054198s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:42.93261857 +0000 UTC m=+824.045151952" lastFinishedPulling="2025-11-25 20:41:14.773361707 +0000 UTC m=+855.885895119" observedRunningTime="2025-11-25 20:41:17.717989814 +0000 UTC m=+858.830523206" watchObservedRunningTime="2025-11-25 20:41:17.723054198 +0000 UTC m=+858.835587590" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.775491 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" podStartSLOduration=5.500307134 podStartE2EDuration="36.775468316s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:43.484234405 +0000 UTC m=+824.596767797" lastFinishedPulling="2025-11-25 20:41:14.759395587 +0000 UTC m=+855.871928979" observedRunningTime="2025-11-25 20:41:17.754041708 +0000 UTC m=+858.866575100" watchObservedRunningTime="2025-11-25 20:41:17.775468316 +0000 UTC m=+858.888001708" Nov 25 20:41:17 crc kubenswrapper[4983]: I1125 20:41:17.778890 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" podStartSLOduration=3.290633817 podStartE2EDuration="36.778877796s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:43.486955837 +0000 UTC m=+824.599489229" lastFinishedPulling="2025-11-25 20:41:16.975199816 +0000 UTC m=+858.087733208" observedRunningTime="2025-11-25 20:41:17.773768641 +0000 UTC m=+858.886302033" watchObservedRunningTime="2025-11-25 20:41:17.778877796 +0000 UTC m=+858.891411188" Nov 25 20:41:18 crc kubenswrapper[4983]: I1125 20:41:18.499269 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" 
event={"ID":"cf765330-a0f9-4603-a92b-4aec8feaeafb","Type":"ContainerStarted","Data":"6db4f7718946bdb494f6d3f9c8048577aea80b535d5af70bbebcfeff8ee9d42f"} Nov 25 20:41:18 crc kubenswrapper[4983]: I1125 20:41:18.499717 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" Nov 25 20:41:18 crc kubenswrapper[4983]: I1125 20:41:18.501282 4983 generic.go:334] "Generic (PLEG): container finished" podID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" containerID="29c04379c595ac3d5c592e076e4c0234110539c453085d49d18d314a513331fa" exitCode=0 Nov 25 20:41:18 crc kubenswrapper[4983]: I1125 20:41:18.501451 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmnq6" event={"ID":"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c","Type":"ContainerDied","Data":"29c04379c595ac3d5c592e076e4c0234110539c453085d49d18d314a513331fa"} Nov 25 20:41:18 crc kubenswrapper[4983]: I1125 20:41:18.502635 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:41:18 crc kubenswrapper[4983]: I1125 20:41:18.523749 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" podStartSLOduration=2.232246943 podStartE2EDuration="37.523713117s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:42.614772964 +0000 UTC m=+823.727306356" lastFinishedPulling="2025-11-25 20:41:17.906239138 +0000 UTC m=+859.018772530" observedRunningTime="2025-11-25 20:41:18.521309443 +0000 UTC m=+859.633842835" watchObservedRunningTime="2025-11-25 20:41:18.523713117 +0000 UTC m=+859.636246509" Nov 25 20:41:19 crc kubenswrapper[4983]: I1125 20:41:19.540150 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmnq6" 
event={"ID":"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c","Type":"ContainerStarted","Data":"41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652"} Nov 25 20:41:19 crc kubenswrapper[4983]: I1125 20:41:19.546507 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:41:19 crc kubenswrapper[4983]: I1125 20:41:19.576089 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nmnq6" podStartSLOduration=16.878323521 podStartE2EDuration="19.576059751s" podCreationTimestamp="2025-11-25 20:41:00 +0000 UTC" firstStartedPulling="2025-11-25 20:41:16.239569408 +0000 UTC m=+857.352102800" lastFinishedPulling="2025-11-25 20:41:18.937305638 +0000 UTC m=+860.049839030" observedRunningTime="2025-11-25 20:41:19.567059052 +0000 UTC m=+860.679592454" watchObservedRunningTime="2025-11-25 20:41:19.576059751 +0000 UTC m=+860.688593143" Nov 25 20:41:20 crc kubenswrapper[4983]: I1125 20:41:20.821692 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:20 crc kubenswrapper[4983]: I1125 20:41:20.822225 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:21 crc kubenswrapper[4983]: I1125 20:41:21.693143 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" Nov 25 20:41:21 crc kubenswrapper[4983]: I1125 20:41:21.885245 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nmnq6" podUID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" containerName="registry-server" probeResult="failure" output=< Nov 25 20:41:21 crc kubenswrapper[4983]: timeout: failed to connect service ":50051" within 1s Nov 25 20:41:21 crc 
kubenswrapper[4983]: > Nov 25 20:41:22 crc kubenswrapper[4983]: I1125 20:41:22.062919 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" Nov 25 20:41:22 crc kubenswrapper[4983]: I1125 20:41:22.152103 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" Nov 25 20:41:22 crc kubenswrapper[4983]: I1125 20:41:22.165849 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:41:22 crc kubenswrapper[4983]: I1125 20:41:22.297364 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" Nov 25 20:41:22 crc kubenswrapper[4983]: I1125 20:41:22.424876 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" Nov 25 20:41:26 crc kubenswrapper[4983]: I1125 20:41:26.038372 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:41:26 crc kubenswrapper[4983]: I1125 20:41:26.039146 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:41:26 crc kubenswrapper[4983]: I1125 20:41:26.107690 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:41:26 crc kubenswrapper[4983]: I1125 20:41:26.681266 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:41:26 crc kubenswrapper[4983]: I1125 20:41:26.767566 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-f88c9"] Nov 25 20:41:27 crc kubenswrapper[4983]: I1125 20:41:27.501825 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:41:27 crc kubenswrapper[4983]: E1125 20:41:27.608613 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" podUID="ff284fea-7792-40e1-8ede-f52412a6c014" Nov 25 20:41:28 crc kubenswrapper[4983]: I1125 20:41:28.616759 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f88c9" podUID="e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" containerName="registry-server" containerID="cri-o://541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71" gracePeriod=2 Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.092987 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.152000 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-catalog-content\") pod \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\" (UID: \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\") " Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.152065 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-utilities\") pod \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\" (UID: \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\") " Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.152097 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-477xl\" (UniqueName: \"kubernetes.io/projected/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-kube-api-access-477xl\") pod \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\" (UID: \"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c\") " Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.153812 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-utilities" (OuterVolumeSpecName: "utilities") pod "e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" (UID: "e3401dc1-8ab8-4aaf-b784-a4bd824cf74c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.175502 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-kube-api-access-477xl" (OuterVolumeSpecName: "kube-api-access-477xl") pod "e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" (UID: "e3401dc1-8ab8-4aaf-b784-a4bd824cf74c"). InnerVolumeSpecName "kube-api-access-477xl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.207127 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" (UID: "e3401dc1-8ab8-4aaf-b784-a4bd824cf74c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.254358 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.254396 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.254409 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-477xl\" (UniqueName: \"kubernetes.io/projected/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c-kube-api-access-477xl\") on node \"crc\" DevicePath \"\"" Nov 25 20:41:29 crc kubenswrapper[4983]: E1125 20:41:29.610944 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" podUID="ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.627883 4983 generic.go:334] "Generic (PLEG): container finished" podID="e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" 
containerID="541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71" exitCode=0 Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.627925 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88c9" event={"ID":"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c","Type":"ContainerDied","Data":"541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71"} Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.627952 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f88c9" event={"ID":"e3401dc1-8ab8-4aaf-b784-a4bd824cf74c","Type":"ContainerDied","Data":"dc45675159dc900761512ea3878ddb3e45ecf592e0f51a344b9c7d3484e1bec4"} Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.628006 4983 scope.go:117] "RemoveContainer" containerID="541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.628052 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f88c9" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.665157 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f88c9"] Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.671607 4983 scope.go:117] "RemoveContainer" containerID="cd4283ba1e9f1d3c36fd5c8d7ae6c3b6ac7184891f0e6eb7cfe9e732967584b1" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.671738 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f88c9"] Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.694790 4983 scope.go:117] "RemoveContainer" containerID="6d56181e2483c57a7585896d360887297641ed32a01a18a967b39d615e14fa61" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.729355 4983 scope.go:117] "RemoveContainer" containerID="541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71" Nov 25 20:41:29 crc kubenswrapper[4983]: E1125 20:41:29.730116 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71\": container with ID starting with 541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71 not found: ID does not exist" containerID="541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.730164 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71"} err="failed to get container status \"541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71\": rpc error: code = NotFound desc = could not find container \"541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71\": container with ID starting with 541d9194d538c8dd0b2971ae87d8eb34995c834686196e0611755459bd568d71 not 
found: ID does not exist" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.730196 4983 scope.go:117] "RemoveContainer" containerID="cd4283ba1e9f1d3c36fd5c8d7ae6c3b6ac7184891f0e6eb7cfe9e732967584b1" Nov 25 20:41:29 crc kubenswrapper[4983]: E1125 20:41:29.730696 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4283ba1e9f1d3c36fd5c8d7ae6c3b6ac7184891f0e6eb7cfe9e732967584b1\": container with ID starting with cd4283ba1e9f1d3c36fd5c8d7ae6c3b6ac7184891f0e6eb7cfe9e732967584b1 not found: ID does not exist" containerID="cd4283ba1e9f1d3c36fd5c8d7ae6c3b6ac7184891f0e6eb7cfe9e732967584b1" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.730731 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd4283ba1e9f1d3c36fd5c8d7ae6c3b6ac7184891f0e6eb7cfe9e732967584b1"} err="failed to get container status \"cd4283ba1e9f1d3c36fd5c8d7ae6c3b6ac7184891f0e6eb7cfe9e732967584b1\": rpc error: code = NotFound desc = could not find container \"cd4283ba1e9f1d3c36fd5c8d7ae6c3b6ac7184891f0e6eb7cfe9e732967584b1\": container with ID starting with cd4283ba1e9f1d3c36fd5c8d7ae6c3b6ac7184891f0e6eb7cfe9e732967584b1 not found: ID does not exist" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.730750 4983 scope.go:117] "RemoveContainer" containerID="6d56181e2483c57a7585896d360887297641ed32a01a18a967b39d615e14fa61" Nov 25 20:41:29 crc kubenswrapper[4983]: E1125 20:41:29.731033 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d56181e2483c57a7585896d360887297641ed32a01a18a967b39d615e14fa61\": container with ID starting with 6d56181e2483c57a7585896d360887297641ed32a01a18a967b39d615e14fa61 not found: ID does not exist" containerID="6d56181e2483c57a7585896d360887297641ed32a01a18a967b39d615e14fa61" Nov 25 20:41:29 crc kubenswrapper[4983]: I1125 20:41:29.731059 4983 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d56181e2483c57a7585896d360887297641ed32a01a18a967b39d615e14fa61"} err="failed to get container status \"6d56181e2483c57a7585896d360887297641ed32a01a18a967b39d615e14fa61\": rpc error: code = NotFound desc = could not find container \"6d56181e2483c57a7585896d360887297641ed32a01a18a967b39d615e14fa61\": container with ID starting with 6d56181e2483c57a7585896d360887297641ed32a01a18a967b39d615e14fa61 not found: ID does not exist" Nov 25 20:41:30 crc kubenswrapper[4983]: I1125 20:41:30.892475 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:30 crc kubenswrapper[4983]: I1125 20:41:30.961287 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:31 crc kubenswrapper[4983]: I1125 20:41:31.623611 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" path="/var/lib/kubelet/pods/e3401dc1-8ab8-4aaf-b784-a4bd824cf74c/volumes" Nov 25 20:41:31 crc kubenswrapper[4983]: I1125 20:41:31.675819 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" Nov 25 20:41:31 crc kubenswrapper[4983]: I1125 20:41:31.951727 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmnq6"] Nov 25 20:41:32 crc kubenswrapper[4983]: I1125 20:41:32.662848 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nmnq6" podUID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" containerName="registry-server" containerID="cri-o://41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652" gracePeriod=2 Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.173096 4983 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.324410 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-catalog-content\") pod \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\" (UID: \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\") " Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.324617 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88ptg\" (UniqueName: \"kubernetes.io/projected/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-kube-api-access-88ptg\") pod \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\" (UID: \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\") " Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.324748 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-utilities\") pod \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\" (UID: \"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c\") " Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.325593 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-utilities" (OuterVolumeSpecName: "utilities") pod "88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" (UID: "88c4bc44-3aa0-466f-a9e0-eb9db4c2390c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.354858 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-kube-api-access-88ptg" (OuterVolumeSpecName: "kube-api-access-88ptg") pod "88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" (UID: "88c4bc44-3aa0-466f-a9e0-eb9db4c2390c"). 
InnerVolumeSpecName "kube-api-access-88ptg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.418704 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" (UID: "88c4bc44-3aa0-466f-a9e0-eb9db4c2390c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.426310 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88ptg\" (UniqueName: \"kubernetes.io/projected/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-kube-api-access-88ptg\") on node \"crc\" DevicePath \"\"" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.426355 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.426366 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.678060 4983 generic.go:334] "Generic (PLEG): container finished" podID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" containerID="41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652" exitCode=0 Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.678236 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmnq6" event={"ID":"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c","Type":"ContainerDied","Data":"41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652"} Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.678268 4983 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmnq6" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.678663 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmnq6" event={"ID":"88c4bc44-3aa0-466f-a9e0-eb9db4c2390c","Type":"ContainerDied","Data":"5299e594bc13e919d1a5112780e469520f41ba9d0a6eef9670048ecbc2dd5000"} Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.678713 4983 scope.go:117] "RemoveContainer" containerID="41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.717964 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmnq6"] Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.719523 4983 scope.go:117] "RemoveContainer" containerID="29c04379c595ac3d5c592e076e4c0234110539c453085d49d18d314a513331fa" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.728366 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nmnq6"] Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.760216 4983 scope.go:117] "RemoveContainer" containerID="7accebf509a349da44ab292c59edb31fcb0032b8a5495f4a949ed805832939e2" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.785326 4983 scope.go:117] "RemoveContainer" containerID="41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652" Nov 25 20:41:33 crc kubenswrapper[4983]: E1125 20:41:33.786296 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652\": container with ID starting with 41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652 not found: ID does not exist" containerID="41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652" Nov 25 20:41:33 crc 
kubenswrapper[4983]: I1125 20:41:33.786419 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652"} err="failed to get container status \"41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652\": rpc error: code = NotFound desc = could not find container \"41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652\": container with ID starting with 41b6a575d4f2e5f78ab1712d60d37d8471c063cb7fcfec7bf4c736cbe8fea652 not found: ID does not exist" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.786528 4983 scope.go:117] "RemoveContainer" containerID="29c04379c595ac3d5c592e076e4c0234110539c453085d49d18d314a513331fa" Nov 25 20:41:33 crc kubenswrapper[4983]: E1125 20:41:33.787118 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c04379c595ac3d5c592e076e4c0234110539c453085d49d18d314a513331fa\": container with ID starting with 29c04379c595ac3d5c592e076e4c0234110539c453085d49d18d314a513331fa not found: ID does not exist" containerID="29c04379c595ac3d5c592e076e4c0234110539c453085d49d18d314a513331fa" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.787164 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c04379c595ac3d5c592e076e4c0234110539c453085d49d18d314a513331fa"} err="failed to get container status \"29c04379c595ac3d5c592e076e4c0234110539c453085d49d18d314a513331fa\": rpc error: code = NotFound desc = could not find container \"29c04379c595ac3d5c592e076e4c0234110539c453085d49d18d314a513331fa\": container with ID starting with 29c04379c595ac3d5c592e076e4c0234110539c453085d49d18d314a513331fa not found: ID does not exist" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.787197 4983 scope.go:117] "RemoveContainer" containerID="7accebf509a349da44ab292c59edb31fcb0032b8a5495f4a949ed805832939e2" Nov 25 
20:41:33 crc kubenswrapper[4983]: E1125 20:41:33.787660 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7accebf509a349da44ab292c59edb31fcb0032b8a5495f4a949ed805832939e2\": container with ID starting with 7accebf509a349da44ab292c59edb31fcb0032b8a5495f4a949ed805832939e2 not found: ID does not exist" containerID="7accebf509a349da44ab292c59edb31fcb0032b8a5495f4a949ed805832939e2" Nov 25 20:41:33 crc kubenswrapper[4983]: I1125 20:41:33.787704 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7accebf509a349da44ab292c59edb31fcb0032b8a5495f4a949ed805832939e2"} err="failed to get container status \"7accebf509a349da44ab292c59edb31fcb0032b8a5495f4a949ed805832939e2\": rpc error: code = NotFound desc = could not find container \"7accebf509a349da44ab292c59edb31fcb0032b8a5495f4a949ed805832939e2\": container with ID starting with 7accebf509a349da44ab292c59edb31fcb0032b8a5495f4a949ed805832939e2 not found: ID does not exist" Nov 25 20:41:35 crc kubenswrapper[4983]: I1125 20:41:35.621156 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" path="/var/lib/kubelet/pods/88c4bc44-3aa0-466f-a9e0-eb9db4c2390c/volumes" Nov 25 20:41:40 crc kubenswrapper[4983]: I1125 20:41:40.754950 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" event={"ID":"ff284fea-7792-40e1-8ede-f52412a6c014","Type":"ContainerStarted","Data":"e566187b30e83938355b3a1c52c572bdd1b1cfc3f9b6770bdb9fd69f8e5862bf"} Nov 25 20:41:40 crc kubenswrapper[4983]: I1125 20:41:40.792024 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" podStartSLOduration=2.271677607 podStartE2EDuration="58.79199327s" podCreationTimestamp="2025-11-25 20:40:42 +0000 UTC" 
firstStartedPulling="2025-11-25 20:40:43.563195428 +0000 UTC m=+824.675728820" lastFinishedPulling="2025-11-25 20:41:40.083511101 +0000 UTC m=+881.196044483" observedRunningTime="2025-11-25 20:41:40.784944493 +0000 UTC m=+881.897477925" watchObservedRunningTime="2025-11-25 20:41:40.79199327 +0000 UTC m=+881.904526692" Nov 25 20:41:42 crc kubenswrapper[4983]: I1125 20:41:42.780360 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" event={"ID":"ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042","Type":"ContainerStarted","Data":"054daea6e75eb5e2bdebf3c62e6aeb967c39fff4d427308f9fa784e913be78c0"} Nov 25 20:41:42 crc kubenswrapper[4983]: I1125 20:41:42.781641 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" Nov 25 20:41:42 crc kubenswrapper[4983]: I1125 20:41:42.823581 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" podStartSLOduration=3.308483078 podStartE2EDuration="1m1.823534091s" podCreationTimestamp="2025-11-25 20:40:41 +0000 UTC" firstStartedPulling="2025-11-25 20:40:43.601110448 +0000 UTC m=+824.713643840" lastFinishedPulling="2025-11-25 20:41:42.116161421 +0000 UTC m=+883.228694853" observedRunningTime="2025-11-25 20:41:42.814733908 +0000 UTC m=+883.927267340" watchObservedRunningTime="2025-11-25 20:41:42.823534091 +0000 UTC m=+883.936067493" Nov 25 20:41:52 crc kubenswrapper[4983]: I1125 20:41:52.326777 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.878536 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7trl9"] Nov 25 20:42:07 crc kubenswrapper[4983]: E1125 20:42:07.879771 4983 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" containerName="extract-utilities" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.879792 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" containerName="extract-utilities" Nov 25 20:42:07 crc kubenswrapper[4983]: E1125 20:42:07.879839 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" containerName="extract-content" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.879849 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" containerName="extract-content" Nov 25 20:42:07 crc kubenswrapper[4983]: E1125 20:42:07.879895 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" containerName="registry-server" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.879904 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" containerName="registry-server" Nov 25 20:42:07 crc kubenswrapper[4983]: E1125 20:42:07.879926 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" containerName="extract-utilities" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.879935 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" containerName="extract-utilities" Nov 25 20:42:07 crc kubenswrapper[4983]: E1125 20:42:07.879955 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" containerName="registry-server" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.879964 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" containerName="registry-server" Nov 25 20:42:07 crc kubenswrapper[4983]: E1125 20:42:07.879982 4983 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" containerName="extract-content" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.879991 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" containerName="extract-content" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.880179 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3401dc1-8ab8-4aaf-b784-a4bd824cf74c" containerName="registry-server" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.880199 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c4bc44-3aa0-466f-a9e0-eb9db4c2390c" containerName="registry-server" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.885179 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.890604 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-lnbzd" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.890714 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.891204 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.891974 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7trl9"] Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.894393 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.997836 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nhxrf"] Nov 25 20:42:07 crc kubenswrapper[4983]: I1125 20:42:07.999067 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.001812 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.031688 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkmrk\" (UniqueName: \"kubernetes.io/projected/4424f5a7-8ecf-4708-b049-c6af38b28804-kube-api-access-pkmrk\") pod \"dnsmasq-dns-675f4bcbfc-7trl9\" (UID: \"4424f5a7-8ecf-4708-b049-c6af38b28804\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.031766 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4424f5a7-8ecf-4708-b049-c6af38b28804-config\") pod \"dnsmasq-dns-675f4bcbfc-7trl9\" (UID: \"4424f5a7-8ecf-4708-b049-c6af38b28804\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.032974 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nhxrf"] Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.132919 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk472\" (UniqueName: \"kubernetes.io/projected/8d005dc9-64cc-403c-a7e4-4b06463767c1-kube-api-access-vk472\") pod \"dnsmasq-dns-78dd6ddcc-nhxrf\" (UID: \"8d005dc9-64cc-403c-a7e4-4b06463767c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.133261 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d005dc9-64cc-403c-a7e4-4b06463767c1-config\") pod \"dnsmasq-dns-78dd6ddcc-nhxrf\" (UID: \"8d005dc9-64cc-403c-a7e4-4b06463767c1\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.133318 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkmrk\" (UniqueName: \"kubernetes.io/projected/4424f5a7-8ecf-4708-b049-c6af38b28804-kube-api-access-pkmrk\") pod \"dnsmasq-dns-675f4bcbfc-7trl9\" (UID: \"4424f5a7-8ecf-4708-b049-c6af38b28804\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.133343 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d005dc9-64cc-403c-a7e4-4b06463767c1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nhxrf\" (UID: \"8d005dc9-64cc-403c-a7e4-4b06463767c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.133362 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4424f5a7-8ecf-4708-b049-c6af38b28804-config\") pod \"dnsmasq-dns-675f4bcbfc-7trl9\" (UID: \"4424f5a7-8ecf-4708-b049-c6af38b28804\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.134391 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4424f5a7-8ecf-4708-b049-c6af38b28804-config\") pod \"dnsmasq-dns-675f4bcbfc-7trl9\" (UID: \"4424f5a7-8ecf-4708-b049-c6af38b28804\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.158374 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkmrk\" (UniqueName: \"kubernetes.io/projected/4424f5a7-8ecf-4708-b049-c6af38b28804-kube-api-access-pkmrk\") pod \"dnsmasq-dns-675f4bcbfc-7trl9\" (UID: \"4424f5a7-8ecf-4708-b049-c6af38b28804\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" Nov 25 20:42:08 crc 
kubenswrapper[4983]: I1125 20:42:08.234735 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d005dc9-64cc-403c-a7e4-4b06463767c1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nhxrf\" (UID: \"8d005dc9-64cc-403c-a7e4-4b06463767c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.234931 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk472\" (UniqueName: \"kubernetes.io/projected/8d005dc9-64cc-403c-a7e4-4b06463767c1-kube-api-access-vk472\") pod \"dnsmasq-dns-78dd6ddcc-nhxrf\" (UID: \"8d005dc9-64cc-403c-a7e4-4b06463767c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.235004 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d005dc9-64cc-403c-a7e4-4b06463767c1-config\") pod \"dnsmasq-dns-78dd6ddcc-nhxrf\" (UID: \"8d005dc9-64cc-403c-a7e4-4b06463767c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.235681 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d005dc9-64cc-403c-a7e4-4b06463767c1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nhxrf\" (UID: \"8d005dc9-64cc-403c-a7e4-4b06463767c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.236050 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d005dc9-64cc-403c-a7e4-4b06463767c1-config\") pod \"dnsmasq-dns-78dd6ddcc-nhxrf\" (UID: \"8d005dc9-64cc-403c-a7e4-4b06463767c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.247830 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.253487 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk472\" (UniqueName: \"kubernetes.io/projected/8d005dc9-64cc-403c-a7e4-4b06463767c1-kube-api-access-vk472\") pod \"dnsmasq-dns-78dd6ddcc-nhxrf\" (UID: \"8d005dc9-64cc-403c-a7e4-4b06463767c1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.317117 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.707263 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7trl9"] Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.712672 4983 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:42:08 crc kubenswrapper[4983]: I1125 20:42:08.795024 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nhxrf"] Nov 25 20:42:08 crc kubenswrapper[4983]: W1125 20:42:08.796489 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d005dc9_64cc_403c_a7e4_4b06463767c1.slice/crio-4ff98d689f2ddb65dee232c534bfbb595eb539f69ac8a05ecb571828f116a02d WatchSource:0}: Error finding container 4ff98d689f2ddb65dee232c534bfbb595eb539f69ac8a05ecb571828f116a02d: Status 404 returned error can't find the container with id 4ff98d689f2ddb65dee232c534bfbb595eb539f69ac8a05ecb571828f116a02d Nov 25 20:42:09 crc kubenswrapper[4983]: I1125 20:42:09.061188 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" event={"ID":"4424f5a7-8ecf-4708-b049-c6af38b28804","Type":"ContainerStarted","Data":"a3b410296dacf06e858f02548edf6b319dd26281850bbf2b74df1b12a9404c6e"} Nov 25 
20:42:09 crc kubenswrapper[4983]: I1125 20:42:09.062483 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" event={"ID":"8d005dc9-64cc-403c-a7e4-4b06463767c1","Type":"ContainerStarted","Data":"4ff98d689f2ddb65dee232c534bfbb595eb539f69ac8a05ecb571828f116a02d"} Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.309672 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7trl9"] Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.336389 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vj48c"] Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.337727 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.360410 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vj48c"] Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.483167 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bbf6f55-78b8-41e6-9e55-be10664ab74e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vj48c\" (UID: \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\") " pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.483242 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jsn\" (UniqueName: \"kubernetes.io/projected/6bbf6f55-78b8-41e6-9e55-be10664ab74e-kube-api-access-h8jsn\") pod \"dnsmasq-dns-666b6646f7-vj48c\" (UID: \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\") " pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.483286 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6bbf6f55-78b8-41e6-9e55-be10664ab74e-config\") pod \"dnsmasq-dns-666b6646f7-vj48c\" (UID: \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\") " pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.584751 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bbf6f55-78b8-41e6-9e55-be10664ab74e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vj48c\" (UID: \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\") " pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.584816 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jsn\" (UniqueName: \"kubernetes.io/projected/6bbf6f55-78b8-41e6-9e55-be10664ab74e-kube-api-access-h8jsn\") pod \"dnsmasq-dns-666b6646f7-vj48c\" (UID: \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\") " pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.584843 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bbf6f55-78b8-41e6-9e55-be10664ab74e-config\") pod \"dnsmasq-dns-666b6646f7-vj48c\" (UID: \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\") " pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.585899 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bbf6f55-78b8-41e6-9e55-be10664ab74e-config\") pod \"dnsmasq-dns-666b6646f7-vj48c\" (UID: \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\") " pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.585923 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bbf6f55-78b8-41e6-9e55-be10664ab74e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-vj48c\" (UID: 
\"6bbf6f55-78b8-41e6-9e55-be10664ab74e\") " pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.617657 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jsn\" (UniqueName: \"kubernetes.io/projected/6bbf6f55-78b8-41e6-9e55-be10664ab74e-kube-api-access-h8jsn\") pod \"dnsmasq-dns-666b6646f7-vj48c\" (UID: \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\") " pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.668602 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.703304 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nhxrf"] Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.726395 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k69m2"] Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.744066 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.746108 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k69m2"] Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.892046 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323e207f-794c-4f9e-8a02-b567237b08e6-config\") pod \"dnsmasq-dns-57d769cc4f-k69m2\" (UID: \"323e207f-794c-4f9e-8a02-b567237b08e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.892504 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323e207f-794c-4f9e-8a02-b567237b08e6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-k69m2\" (UID: \"323e207f-794c-4f9e-8a02-b567237b08e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.892783 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wfsf\" (UniqueName: \"kubernetes.io/projected/323e207f-794c-4f9e-8a02-b567237b08e6-kube-api-access-7wfsf\") pod \"dnsmasq-dns-57d769cc4f-k69m2\" (UID: \"323e207f-794c-4f9e-8a02-b567237b08e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.994592 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323e207f-794c-4f9e-8a02-b567237b08e6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-k69m2\" (UID: \"323e207f-794c-4f9e-8a02-b567237b08e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.994708 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wfsf\" (UniqueName: 
\"kubernetes.io/projected/323e207f-794c-4f9e-8a02-b567237b08e6-kube-api-access-7wfsf\") pod \"dnsmasq-dns-57d769cc4f-k69m2\" (UID: \"323e207f-794c-4f9e-8a02-b567237b08e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.994769 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323e207f-794c-4f9e-8a02-b567237b08e6-config\") pod \"dnsmasq-dns-57d769cc4f-k69m2\" (UID: \"323e207f-794c-4f9e-8a02-b567237b08e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.995687 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323e207f-794c-4f9e-8a02-b567237b08e6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-k69m2\" (UID: \"323e207f-794c-4f9e-8a02-b567237b08e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:11 crc kubenswrapper[4983]: I1125 20:42:11.996236 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323e207f-794c-4f9e-8a02-b567237b08e6-config\") pod \"dnsmasq-dns-57d769cc4f-k69m2\" (UID: \"323e207f-794c-4f9e-8a02-b567237b08e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.013366 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wfsf\" (UniqueName: \"kubernetes.io/projected/323e207f-794c-4f9e-8a02-b567237b08e6-kube-api-access-7wfsf\") pod \"dnsmasq-dns-57d769cc4f-k69m2\" (UID: \"323e207f-794c-4f9e-8a02-b567237b08e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.061505 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.509905 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.514272 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.519735 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.523768 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.523978 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.524079 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.524110 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-67ndd" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.524222 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.524969 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.533056 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.705283 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.705388 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbvz5\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-kube-api-access-vbvz5\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.705448 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.705516 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.705651 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.705779 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.705831 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.705971 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.706164 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.706244 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.706300 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc 
kubenswrapper[4983]: I1125 20:42:12.807735 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.807848 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.807900 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.807942 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-config-data\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.807982 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbvz5\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-kube-api-access-vbvz5\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.808020 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.808325 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.808446 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.808536 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.808628 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.808671 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.808740 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.808905 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.808960 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-config-data\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.809417 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.812001 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.812128 4983 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.812468 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.812761 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.814018 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.822257 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.832382 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbvz5\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-kube-api-access-vbvz5\") pod \"rabbitmq-server-0\" (UID: 
\"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.850795 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " pod="openstack/rabbitmq-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.872658 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.873968 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.882503 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.882762 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.882894 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7mg2v" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.883021 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.884174 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.884222 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 20:42:12 crc kubenswrapper[4983]: I1125 20:42:12.884669 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 20:42:12 crc 
kubenswrapper[4983]: I1125 20:42:12.898489 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.012087 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwnhp\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-kube-api-access-nwnhp\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.012484 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.012503 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.012521 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.012572 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.012636 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7aa78f0-48cd-4845-8a44-52fb63183dff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.012673 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.012689 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.012712 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.012733 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/a7aa78f0-48cd-4845-8a44-52fb63183dff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.012765 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.113758 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwnhp\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-kube-api-access-nwnhp\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.113815 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.113831 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.113848 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.113870 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.113888 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7aa78f0-48cd-4845-8a44-52fb63183dff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.113921 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.113937 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.113960 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.113976 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7aa78f0-48cd-4845-8a44-52fb63183dff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.114008 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.115126 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.115633 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.115888 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc 
kubenswrapper[4983]: I1125 20:42:13.116101 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.116752 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.118813 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.122201 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.125015 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7aa78f0-48cd-4845-8a44-52fb63183dff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.128320 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7aa78f0-48cd-4845-8a44-52fb63183dff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.137988 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.141316 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.143467 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwnhp\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-kube-api-access-nwnhp\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.151398 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:13 crc kubenswrapper[4983]: I1125 20:42:13.242063 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.113316 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.115588 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.119181 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.123202 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.123519 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-28kzp" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.124145 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.125212 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.148606 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.231146 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca63c157-60df-45de-854f-03989f565e8f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.231238 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.231297 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca63c157-60df-45de-854f-03989f565e8f-config-data-default\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.231337 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj56k\" (UniqueName: \"kubernetes.io/projected/ca63c157-60df-45de-854f-03989f565e8f-kube-api-access-dj56k\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.231363 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca63c157-60df-45de-854f-03989f565e8f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.231395 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca63c157-60df-45de-854f-03989f565e8f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.231444 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca63c157-60df-45de-854f-03989f565e8f-kolla-config\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.231500 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/ca63c157-60df-45de-854f-03989f565e8f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.333154 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca63c157-60df-45de-854f-03989f565e8f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.333498 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.333617 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca63c157-60df-45de-854f-03989f565e8f-config-data-default\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.333707 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj56k\" (UniqueName: \"kubernetes.io/projected/ca63c157-60df-45de-854f-03989f565e8f-kube-api-access-dj56k\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.333790 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca63c157-60df-45de-854f-03989f565e8f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.333859 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.333875 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca63c157-60df-45de-854f-03989f565e8f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.334084 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca63c157-60df-45de-854f-03989f565e8f-kolla-config\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.334211 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ca63c157-60df-45de-854f-03989f565e8f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.334621 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ca63c157-60df-45de-854f-03989f565e8f-config-data-default\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 
20:42:14.334672 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ca63c157-60df-45de-854f-03989f565e8f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.335106 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ca63c157-60df-45de-854f-03989f565e8f-kolla-config\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.335253 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca63c157-60df-45de-854f-03989f565e8f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.340922 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca63c157-60df-45de-854f-03989f565e8f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.351758 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca63c157-60df-45de-854f-03989f565e8f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.351914 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj56k\" (UniqueName: 
\"kubernetes.io/projected/ca63c157-60df-45de-854f-03989f565e8f-kube-api-access-dj56k\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.369529 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"ca63c157-60df-45de-854f-03989f565e8f\") " pod="openstack/openstack-galera-0" Nov 25 20:42:14 crc kubenswrapper[4983]: I1125 20:42:14.441759 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.526930 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.529192 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.533859 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-jgf4r" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.535146 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.535788 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.535914 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.539962 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.657349 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/13cd3da7-02fa-42c2-a62a-527df23e92b1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.657415 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.657498 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/13cd3da7-02fa-42c2-a62a-527df23e92b1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.657519 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13cd3da7-02fa-42c2-a62a-527df23e92b1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.657573 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13cd3da7-02fa-42c2-a62a-527df23e92b1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.657794 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/13cd3da7-02fa-42c2-a62a-527df23e92b1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.657842 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13cd3da7-02fa-42c2-a62a-527df23e92b1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.657906 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6rt5\" (UniqueName: \"kubernetes.io/projected/13cd3da7-02fa-42c2-a62a-527df23e92b1-kube-api-access-q6rt5\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.759971 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/13cd3da7-02fa-42c2-a62a-527df23e92b1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.760025 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13cd3da7-02fa-42c2-a62a-527df23e92b1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.760063 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13cd3da7-02fa-42c2-a62a-527df23e92b1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.760105 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/13cd3da7-02fa-42c2-a62a-527df23e92b1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.760122 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13cd3da7-02fa-42c2-a62a-527df23e92b1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.760146 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6rt5\" (UniqueName: \"kubernetes.io/projected/13cd3da7-02fa-42c2-a62a-527df23e92b1-kube-api-access-q6rt5\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.760193 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/13cd3da7-02fa-42c2-a62a-527df23e92b1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.760221 4983 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.760549 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.760989 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/13cd3da7-02fa-42c2-a62a-527df23e92b1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.761346 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/13cd3da7-02fa-42c2-a62a-527df23e92b1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.761712 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13cd3da7-02fa-42c2-a62a-527df23e92b1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.761792 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/13cd3da7-02fa-42c2-a62a-527df23e92b1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.765916 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/13cd3da7-02fa-42c2-a62a-527df23e92b1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.777977 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13cd3da7-02fa-42c2-a62a-527df23e92b1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.779511 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.784051 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6rt5\" (UniqueName: \"kubernetes.io/projected/13cd3da7-02fa-42c2-a62a-527df23e92b1-kube-api-access-q6rt5\") pod \"openstack-cell1-galera-0\" (UID: \"13cd3da7-02fa-42c2-a62a-527df23e92b1\") " pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.814300 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.818465 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.823708 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.823831 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.828298 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.828488 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lfvsl" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.861994 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba42cf7-cc02-4214-a4e5-c20d987aed64-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.862060 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba42cf7-cc02-4214-a4e5-c20d987aed64-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.862084 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvj5n\" (UniqueName: \"kubernetes.io/projected/6ba42cf7-cc02-4214-a4e5-c20d987aed64-kube-api-access-xvj5n\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.862113 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ba42cf7-cc02-4214-a4e5-c20d987aed64-config-data\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.862171 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6ba42cf7-cc02-4214-a4e5-c20d987aed64-kolla-config\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.872949 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.967588 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba42cf7-cc02-4214-a4e5-c20d987aed64-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.968032 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba42cf7-cc02-4214-a4e5-c20d987aed64-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.968054 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvj5n\" (UniqueName: \"kubernetes.io/projected/6ba42cf7-cc02-4214-a4e5-c20d987aed64-kube-api-access-xvj5n\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.968079 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ba42cf7-cc02-4214-a4e5-c20d987aed64-config-data\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.968118 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6ba42cf7-cc02-4214-a4e5-c20d987aed64-kolla-config\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.969017 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6ba42cf7-cc02-4214-a4e5-c20d987aed64-kolla-config\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.973253 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ba42cf7-cc02-4214-a4e5-c20d987aed64-config-data\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.975296 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba42cf7-cc02-4214-a4e5-c20d987aed64-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.975878 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba42cf7-cc02-4214-a4e5-c20d987aed64-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " 
pod="openstack/memcached-0" Nov 25 20:42:15 crc kubenswrapper[4983]: I1125 20:42:15.990160 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvj5n\" (UniqueName: \"kubernetes.io/projected/6ba42cf7-cc02-4214-a4e5-c20d987aed64-kube-api-access-xvj5n\") pod \"memcached-0\" (UID: \"6ba42cf7-cc02-4214-a4e5-c20d987aed64\") " pod="openstack/memcached-0" Nov 25 20:42:16 crc kubenswrapper[4983]: I1125 20:42:16.153169 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 20:42:17 crc kubenswrapper[4983]: I1125 20:42:17.455105 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 20:42:17 crc kubenswrapper[4983]: I1125 20:42:17.456352 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 20:42:17 crc kubenswrapper[4983]: I1125 20:42:17.461626 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hs4zc" Nov 25 20:42:17 crc kubenswrapper[4983]: I1125 20:42:17.476911 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 20:42:17 crc kubenswrapper[4983]: I1125 20:42:17.595791 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hll7s\" (UniqueName: \"kubernetes.io/projected/e9ce7970-111c-43db-81e7-1ee52d40718b-kube-api-access-hll7s\") pod \"kube-state-metrics-0\" (UID: \"e9ce7970-111c-43db-81e7-1ee52d40718b\") " pod="openstack/kube-state-metrics-0" Nov 25 20:42:17 crc kubenswrapper[4983]: I1125 20:42:17.698699 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hll7s\" (UniqueName: \"kubernetes.io/projected/e9ce7970-111c-43db-81e7-1ee52d40718b-kube-api-access-hll7s\") pod \"kube-state-metrics-0\" (UID: \"e9ce7970-111c-43db-81e7-1ee52d40718b\") " 
pod="openstack/kube-state-metrics-0" Nov 25 20:42:17 crc kubenswrapper[4983]: I1125 20:42:17.720808 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hll7s\" (UniqueName: \"kubernetes.io/projected/e9ce7970-111c-43db-81e7-1ee52d40718b-kube-api-access-hll7s\") pod \"kube-state-metrics-0\" (UID: \"e9ce7970-111c-43db-81e7-1ee52d40718b\") " pod="openstack/kube-state-metrics-0" Nov 25 20:42:17 crc kubenswrapper[4983]: I1125 20:42:17.775509 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 20:42:20 crc kubenswrapper[4983]: I1125 20:42:20.211446 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.389201 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fgn5f"] Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.390541 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.392870 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.398079 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.408665 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-lp2vq" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.414229 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fgn5f"] Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.443100 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-47bp7"] Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.445036 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.457979 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-47bp7"] Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486194 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd8b1052-9050-4771-8be4-3138d9c54d62-var-log-ovn\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486264 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd8b1052-9050-4771-8be4-3138d9c54d62-var-run-ovn\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486324 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-var-log\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486359 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7w77\" (UniqueName: \"kubernetes.io/projected/cd8b1052-9050-4771-8be4-3138d9c54d62-kube-api-access-d7w77\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486385 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/cd8b1052-9050-4771-8be4-3138d9c54d62-scripts\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486406 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8b1052-9050-4771-8be4-3138d9c54d62-combined-ca-bundle\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486450 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-var-lib\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486488 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd8b1052-9050-4771-8be4-3138d9c54d62-ovn-controller-tls-certs\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486507 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-scripts\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486525 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-var-run\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486603 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd8b1052-9050-4771-8be4-3138d9c54d62-var-run\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486644 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtkx4\" (UniqueName: \"kubernetes.io/projected/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-kube-api-access-rtkx4\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.486774 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-etc-ovs\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588011 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd8b1052-9050-4771-8be4-3138d9c54d62-var-log-ovn\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588064 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd8b1052-9050-4771-8be4-3138d9c54d62-var-run-ovn\") pod 
\"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588087 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-var-log\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588115 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7w77\" (UniqueName: \"kubernetes.io/projected/cd8b1052-9050-4771-8be4-3138d9c54d62-kube-api-access-d7w77\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588136 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd8b1052-9050-4771-8be4-3138d9c54d62-scripts\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588156 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8b1052-9050-4771-8be4-3138d9c54d62-combined-ca-bundle\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588174 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-var-lib\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc 
kubenswrapper[4983]: I1125 20:42:21.588222 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd8b1052-9050-4771-8be4-3138d9c54d62-ovn-controller-tls-certs\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588238 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-scripts\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588256 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-var-run\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588272 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd8b1052-9050-4771-8be4-3138d9c54d62-var-run\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588297 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtkx4\" (UniqueName: \"kubernetes.io/projected/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-kube-api-access-rtkx4\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588335 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-etc-ovs\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.588919 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-etc-ovs\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.589179 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-var-lib\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.589199 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/cd8b1052-9050-4771-8be4-3138d9c54d62-var-log-ovn\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.589356 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd8b1052-9050-4771-8be4-3138d9c54d62-var-run-ovn\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.589469 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-var-log\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " 
pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.590599 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cd8b1052-9050-4771-8be4-3138d9c54d62-var-run\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.590722 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-var-run\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.591838 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-scripts\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.593699 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cd8b1052-9050-4771-8be4-3138d9c54d62-scripts\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.608318 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd8b1052-9050-4771-8be4-3138d9c54d62-ovn-controller-tls-certs\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.608439 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd8b1052-9050-4771-8be4-3138d9c54d62-combined-ca-bundle\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.614816 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7w77\" (UniqueName: \"kubernetes.io/projected/cd8b1052-9050-4771-8be4-3138d9c54d62-kube-api-access-d7w77\") pod \"ovn-controller-fgn5f\" (UID: \"cd8b1052-9050-4771-8be4-3138d9c54d62\") " pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.617860 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtkx4\" (UniqueName: \"kubernetes.io/projected/1ab2fd6b-f417-4b0e-b1ac-d374d64b7712-kube-api-access-rtkx4\") pod \"ovn-controller-ovs-47bp7\" (UID: \"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712\") " pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.710257 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.771362 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.877945 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.880859 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.885305 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.890772 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.891128 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.891291 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.894013 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-9v9qs" Nov 25 20:42:21 crc kubenswrapper[4983]: I1125 20:42:21.895848 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:21.998776 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/255fbb78-ee7b-4e1f-bd48-d260792d9be4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:21.998857 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/255fbb78-ee7b-4e1f-bd48-d260792d9be4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:21.998956 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255fbb78-ee7b-4e1f-bd48-d260792d9be4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:21.999004 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255fbb78-ee7b-4e1f-bd48-d260792d9be4-config\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:21.999050 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/255fbb78-ee7b-4e1f-bd48-d260792d9be4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:21.999079 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/255fbb78-ee7b-4e1f-bd48-d260792d9be4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:21.999116 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:21.999156 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkqwl\" (UniqueName: 
\"kubernetes.io/projected/255fbb78-ee7b-4e1f-bd48-d260792d9be4-kube-api-access-hkqwl\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.102667 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/255fbb78-ee7b-4e1f-bd48-d260792d9be4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.102844 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255fbb78-ee7b-4e1f-bd48-d260792d9be4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.102912 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255fbb78-ee7b-4e1f-bd48-d260792d9be4-config\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.102963 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/255fbb78-ee7b-4e1f-bd48-d260792d9be4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.102987 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/255fbb78-ee7b-4e1f-bd48-d260792d9be4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.103020 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.103054 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkqwl\" (UniqueName: \"kubernetes.io/projected/255fbb78-ee7b-4e1f-bd48-d260792d9be4-kube-api-access-hkqwl\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.103108 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/255fbb78-ee7b-4e1f-bd48-d260792d9be4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.103343 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/255fbb78-ee7b-4e1f-bd48-d260792d9be4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.104021 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.104896 4983 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/255fbb78-ee7b-4e1f-bd48-d260792d9be4-config\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.105453 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/255fbb78-ee7b-4e1f-bd48-d260792d9be4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.107089 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/255fbb78-ee7b-4e1f-bd48-d260792d9be4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.107702 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/255fbb78-ee7b-4e1f-bd48-d260792d9be4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.117456 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255fbb78-ee7b-4e1f-bd48-d260792d9be4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.126045 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkqwl\" (UniqueName: \"kubernetes.io/projected/255fbb78-ee7b-4e1f-bd48-d260792d9be4-kube-api-access-hkqwl\") pod \"ovsdbserver-nb-0\" (UID: 
\"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.130844 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"255fbb78-ee7b-4e1f-bd48-d260792d9be4\") " pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:22 crc kubenswrapper[4983]: I1125 20:42:22.216738 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.422293 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.424134 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.426449 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.426763 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.426800 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-tmg8d" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.426982 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.439035 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.557858 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/410c54ac-f4e0-4c9f-873e-939b19eb303b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.558549 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nnrr\" (UniqueName: \"kubernetes.io/projected/410c54ac-f4e0-4c9f-873e-939b19eb303b-kube-api-access-2nnrr\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.558611 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/410c54ac-f4e0-4c9f-873e-939b19eb303b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.558652 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/410c54ac-f4e0-4c9f-873e-939b19eb303b-config\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.558700 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/410c54ac-f4e0-4c9f-873e-939b19eb303b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.558770 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.558802 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410c54ac-f4e0-4c9f-873e-939b19eb303b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.558830 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/410c54ac-f4e0-4c9f-873e-939b19eb303b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.659858 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/410c54ac-f4e0-4c9f-873e-939b19eb303b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.659908 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nnrr\" (UniqueName: \"kubernetes.io/projected/410c54ac-f4e0-4c9f-873e-939b19eb303b-kube-api-access-2nnrr\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.659952 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/410c54ac-f4e0-4c9f-873e-939b19eb303b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " 
pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.659983 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/410c54ac-f4e0-4c9f-873e-939b19eb303b-config\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.660015 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/410c54ac-f4e0-4c9f-873e-939b19eb303b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.660063 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.660085 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410c54ac-f4e0-4c9f-873e-939b19eb303b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.660106 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/410c54ac-f4e0-4c9f-873e-939b19eb303b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.660715 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/410c54ac-f4e0-4c9f-873e-939b19eb303b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.660781 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.661202 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/410c54ac-f4e0-4c9f-873e-939b19eb303b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.666019 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/410c54ac-f4e0-4c9f-873e-939b19eb303b-config\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.673495 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/410c54ac-f4e0-4c9f-873e-939b19eb303b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.673811 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/410c54ac-f4e0-4c9f-873e-939b19eb303b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" 
Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.678827 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410c54ac-f4e0-4c9f-873e-939b19eb303b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.681438 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nnrr\" (UniqueName: \"kubernetes.io/projected/410c54ac-f4e0-4c9f-873e-939b19eb303b-kube-api-access-2nnrr\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.695466 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"410c54ac-f4e0-4c9f-873e-939b19eb303b\") " pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.754214 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:24 crc kubenswrapper[4983]: I1125 20:42:24.868250 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 20:42:25 crc kubenswrapper[4983]: I1125 20:42:25.212481 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90","Type":"ContainerStarted","Data":"2d0fbdc9a8c1d37164b3aa2791005237f3c8fb1ccafa599c75c2ea03389ee36b"} Nov 25 20:42:25 crc kubenswrapper[4983]: E1125 20:42:25.365754 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 20:42:25 crc kubenswrapper[4983]: E1125 20:42:25.365996 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkmrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-7trl9_openstack(4424f5a7-8ecf-4708-b049-c6af38b28804): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:42:25 crc kubenswrapper[4983]: E1125 20:42:25.367252 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" podUID="4424f5a7-8ecf-4708-b049-c6af38b28804" Nov 25 20:42:25 crc kubenswrapper[4983]: E1125 20:42:25.392179 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 20:42:25 crc kubenswrapper[4983]: E1125 20:42:25.392374 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vk472,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePul
lPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-nhxrf_openstack(8d005dc9-64cc-403c-a7e4-4b06463767c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:42:25 crc kubenswrapper[4983]: E1125 20:42:25.394654 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" podUID="8d005dc9-64cc-403c-a7e4-4b06463767c1" Nov 25 20:42:25 crc kubenswrapper[4983]: I1125 20:42:25.980713 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vj48c"] Nov 25 20:42:25 crc kubenswrapper[4983]: I1125 20:42:25.981206 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 20:42:25 crc kubenswrapper[4983]: I1125 20:42:25.986370 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 20:42:25 crc kubenswrapper[4983]: I1125 20:42:25.992346 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k69m2"] Nov 25 20:42:26 crc kubenswrapper[4983]: W1125 20:42:26.010883 4983 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bbf6f55_78b8_41e6_9e55_be10664ab74e.slice/crio-7e394cfb32f714217cbdf8a2a65aea19c651be04b16831e8c4ae14d32be251ec WatchSource:0}: Error finding container 7e394cfb32f714217cbdf8a2a65aea19c651be04b16831e8c4ae14d32be251ec: Status 404 returned error can't find the container with id 7e394cfb32f714217cbdf8a2a65aea19c651be04b16831e8c4ae14d32be251ec Nov 25 20:42:26 crc kubenswrapper[4983]: W1125 20:42:26.018245 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ba42cf7_cc02_4214_a4e5_c20d987aed64.slice/crio-be21045bad280745dec827f0d85d177ebeb4013348ff1d53a1fd1c3b474bf18a WatchSource:0}: Error finding container be21045bad280745dec827f0d85d177ebeb4013348ff1d53a1fd1c3b474bf18a: Status 404 returned error can't find the container with id be21045bad280745dec827f0d85d177ebeb4013348ff1d53a1fd1c3b474bf18a Nov 25 20:42:26 crc kubenswrapper[4983]: W1125 20:42:26.021857 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod323e207f_794c_4f9e_8a02_b567237b08e6.slice/crio-d4248b75e459ddbada65085cc16a2d2e1b05e29da655565c8fdfc2010ccdb092 WatchSource:0}: Error finding container d4248b75e459ddbada65085cc16a2d2e1b05e29da655565c8fdfc2010ccdb092: Status 404 returned error can't find the container with id d4248b75e459ddbada65085cc16a2d2e1b05e29da655565c8fdfc2010ccdb092 Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.173602 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.250867 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a7aa78f0-48cd-4845-8a44-52fb63183dff","Type":"ContainerStarted","Data":"3da00ffc871161710459784659b31ad739bed64b27d8484e0d5644fba9de9db1"} Nov 25 20:42:26 crc 
kubenswrapper[4983]: I1125 20:42:26.252484 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" event={"ID":"6bbf6f55-78b8-41e6-9e55-be10664ab74e","Type":"ContainerStarted","Data":"7e394cfb32f714217cbdf8a2a65aea19c651be04b16831e8c4ae14d32be251ec"} Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.253831 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e9ce7970-111c-43db-81e7-1ee52d40718b","Type":"ContainerStarted","Data":"ddb4ce3186574ec79bee76d6abe6b18330dd71632ac7b047fdddee9dab2eb2ea"} Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.255143 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6ba42cf7-cc02-4214-a4e5-c20d987aed64","Type":"ContainerStarted","Data":"be21045bad280745dec827f0d85d177ebeb4013348ff1d53a1fd1c3b474bf18a"} Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.257346 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" event={"ID":"323e207f-794c-4f9e-8a02-b567237b08e6","Type":"ContainerStarted","Data":"d4248b75e459ddbada65085cc16a2d2e1b05e29da655565c8fdfc2010ccdb092"} Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.258870 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"13cd3da7-02fa-42c2-a62a-527df23e92b1","Type":"ContainerStarted","Data":"a3d8a7f5edf183d0126a9fe98ec0ad506c2028f8e262cda79f0937cc14eb64d3"} Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.377420 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-47bp7"] Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.386962 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fgn5f"] Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.391407 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 
25 20:42:26 crc kubenswrapper[4983]: W1125 20:42:26.407074 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ab2fd6b_f417_4b0e_b1ac_d374d64b7712.slice/crio-b878012526bc6ca981f3ab75ea32b9228e9e02c1c72c7a809f3f32458d88b360 WatchSource:0}: Error finding container b878012526bc6ca981f3ab75ea32b9228e9e02c1c72c7a809f3f32458d88b360: Status 404 returned error can't find the container with id b878012526bc6ca981f3ab75ea32b9228e9e02c1c72c7a809f3f32458d88b360 Nov 25 20:42:26 crc kubenswrapper[4983]: W1125 20:42:26.416839 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca63c157_60df_45de_854f_03989f565e8f.slice/crio-2b73056c977b21df383babaac4f554f779f904fc6dcce0f40339f23baa586520 WatchSource:0}: Error finding container 2b73056c977b21df383babaac4f554f779f904fc6dcce0f40339f23baa586520: Status 404 returned error can't find the container with id 2b73056c977b21df383babaac4f554f779f904fc6dcce0f40339f23baa586520 Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.530070 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.773781 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.776401 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.958454 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4424f5a7-8ecf-4708-b049-c6af38b28804-config\") pod \"4424f5a7-8ecf-4708-b049-c6af38b28804\" (UID: \"4424f5a7-8ecf-4708-b049-c6af38b28804\") " Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.958526 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d005dc9-64cc-403c-a7e4-4b06463767c1-dns-svc\") pod \"8d005dc9-64cc-403c-a7e4-4b06463767c1\" (UID: \"8d005dc9-64cc-403c-a7e4-4b06463767c1\") " Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.958682 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d005dc9-64cc-403c-a7e4-4b06463767c1-config\") pod \"8d005dc9-64cc-403c-a7e4-4b06463767c1\" (UID: \"8d005dc9-64cc-403c-a7e4-4b06463767c1\") " Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.958717 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk472\" (UniqueName: \"kubernetes.io/projected/8d005dc9-64cc-403c-a7e4-4b06463767c1-kube-api-access-vk472\") pod \"8d005dc9-64cc-403c-a7e4-4b06463767c1\" (UID: \"8d005dc9-64cc-403c-a7e4-4b06463767c1\") " Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.958775 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkmrk\" (UniqueName: \"kubernetes.io/projected/4424f5a7-8ecf-4708-b049-c6af38b28804-kube-api-access-pkmrk\") pod \"4424f5a7-8ecf-4708-b049-c6af38b28804\" (UID: \"4424f5a7-8ecf-4708-b049-c6af38b28804\") " Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.959402 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8d005dc9-64cc-403c-a7e4-4b06463767c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8d005dc9-64cc-403c-a7e4-4b06463767c1" (UID: "8d005dc9-64cc-403c-a7e4-4b06463767c1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.959792 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d005dc9-64cc-403c-a7e4-4b06463767c1-config" (OuterVolumeSpecName: "config") pod "8d005dc9-64cc-403c-a7e4-4b06463767c1" (UID: "8d005dc9-64cc-403c-a7e4-4b06463767c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.962828 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4424f5a7-8ecf-4708-b049-c6af38b28804-config" (OuterVolumeSpecName: "config") pod "4424f5a7-8ecf-4708-b049-c6af38b28804" (UID: "4424f5a7-8ecf-4708-b049-c6af38b28804"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.964799 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d005dc9-64cc-403c-a7e4-4b06463767c1-kube-api-access-vk472" (OuterVolumeSpecName: "kube-api-access-vk472") pod "8d005dc9-64cc-403c-a7e4-4b06463767c1" (UID: "8d005dc9-64cc-403c-a7e4-4b06463767c1"). InnerVolumeSpecName "kube-api-access-vk472". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:42:26 crc kubenswrapper[4983]: I1125 20:42:26.967479 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4424f5a7-8ecf-4708-b049-c6af38b28804-kube-api-access-pkmrk" (OuterVolumeSpecName: "kube-api-access-pkmrk") pod "4424f5a7-8ecf-4708-b049-c6af38b28804" (UID: "4424f5a7-8ecf-4708-b049-c6af38b28804"). InnerVolumeSpecName "kube-api-access-pkmrk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.060461 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d005dc9-64cc-403c-a7e4-4b06463767c1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.060493 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d005dc9-64cc-403c-a7e4-4b06463767c1-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.060505 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk472\" (UniqueName: \"kubernetes.io/projected/8d005dc9-64cc-403c-a7e4-4b06463767c1-kube-api-access-vk472\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.060517 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkmrk\" (UniqueName: \"kubernetes.io/projected/4424f5a7-8ecf-4708-b049-c6af38b28804-kube-api-access-pkmrk\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.060526 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4424f5a7-8ecf-4708-b049-c6af38b28804-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.268468 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fgn5f" event={"ID":"cd8b1052-9050-4771-8be4-3138d9c54d62","Type":"ContainerStarted","Data":"3d6c2d13793b2b8b21ad0cb21a9173f6bffd75aefe2e2f694de5ed12bf72c1f6"} Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.273306 4983 generic.go:334] "Generic (PLEG): container finished" podID="323e207f-794c-4f9e-8a02-b567237b08e6" containerID="15142d764ee9f095124a8679bebc8a0dce2a3de7a3366d12287c646872bcc21f" exitCode=0 Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 
20:42:27.273355 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" event={"ID":"323e207f-794c-4f9e-8a02-b567237b08e6","Type":"ContainerDied","Data":"15142d764ee9f095124a8679bebc8a0dce2a3de7a3366d12287c646872bcc21f"} Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.276856 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" event={"ID":"4424f5a7-8ecf-4708-b049-c6af38b28804","Type":"ContainerDied","Data":"a3b410296dacf06e858f02548edf6b319dd26281850bbf2b74df1b12a9404c6e"} Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.277100 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7trl9" Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.278707 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.278720 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-nhxrf" event={"ID":"8d005dc9-64cc-403c-a7e4-4b06463767c1","Type":"ContainerDied","Data":"4ff98d689f2ddb65dee232c534bfbb595eb539f69ac8a05ecb571828f116a02d"} Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.280064 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca63c157-60df-45de-854f-03989f565e8f","Type":"ContainerStarted","Data":"2b73056c977b21df383babaac4f554f779f904fc6dcce0f40339f23baa586520"} Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.281615 4983 generic.go:334] "Generic (PLEG): container finished" podID="6bbf6f55-78b8-41e6-9e55-be10664ab74e" containerID="6248f92777a0d4cbb4d57e08c0903b67aa348b64fe45a84210f2e209107ddbf6" exitCode=0 Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.281664 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" 
event={"ID":"6bbf6f55-78b8-41e6-9e55-be10664ab74e","Type":"ContainerDied","Data":"6248f92777a0d4cbb4d57e08c0903b67aa348b64fe45a84210f2e209107ddbf6"} Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.284950 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-47bp7" event={"ID":"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712","Type":"ContainerStarted","Data":"b878012526bc6ca981f3ab75ea32b9228e9e02c1c72c7a809f3f32458d88b360"} Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.286283 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"255fbb78-ee7b-4e1f-bd48-d260792d9be4","Type":"ContainerStarted","Data":"dd283b0728e13183afbc25b565225700a4f50d41878f451a421ef649da65eec4"} Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.404385 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7trl9"] Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.410773 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7trl9"] Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.458139 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nhxrf"] Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.465213 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nhxrf"] Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.480347 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.617365 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4424f5a7-8ecf-4708-b049-c6af38b28804" path="/var/lib/kubelet/pods/4424f5a7-8ecf-4708-b049-c6af38b28804/volumes" Nov 25 20:42:27 crc kubenswrapper[4983]: I1125 20:42:27.617763 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d005dc9-64cc-403c-a7e4-4b06463767c1" 
path="/var/lib/kubelet/pods/8d005dc9-64cc-403c-a7e4-4b06463767c1/volumes" Nov 25 20:42:28 crc kubenswrapper[4983]: W1125 20:42:28.523137 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod410c54ac_f4e0_4c9f_873e_939b19eb303b.slice/crio-d0eb994c3e76a5caeee26d7a5080a711efef34f57b87bf0970be12b1a2492747 WatchSource:0}: Error finding container d0eb994c3e76a5caeee26d7a5080a711efef34f57b87bf0970be12b1a2492747: Status 404 returned error can't find the container with id d0eb994c3e76a5caeee26d7a5080a711efef34f57b87bf0970be12b1a2492747 Nov 25 20:42:29 crc kubenswrapper[4983]: I1125 20:42:29.308304 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"410c54ac-f4e0-4c9f-873e-939b19eb303b","Type":"ContainerStarted","Data":"d0eb994c3e76a5caeee26d7a5080a711efef34f57b87bf0970be12b1a2492747"} Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.365035 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca63c157-60df-45de-854f-03989f565e8f","Type":"ContainerStarted","Data":"44d1b37a30949ee7fd1b2db32e2294e880ba812e75e3136aaad8fec2f01978fd"} Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.369631 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e9ce7970-111c-43db-81e7-1ee52d40718b","Type":"ContainerStarted","Data":"030e5276954d403bcf167c102210d492a52de96aed5bafbf27bd6e9fb2a09633"} Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.370136 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.372959 4983 generic.go:334] "Generic (PLEG): container finished" podID="1ab2fd6b-f417-4b0e-b1ac-d374d64b7712" containerID="21675ced684bb2f622563b08ef5ba514a3204509b57520a9b097b3a5886d2e71" exitCode=0 Nov 25 20:42:35 crc kubenswrapper[4983]: 
I1125 20:42:35.373021 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-47bp7" event={"ID":"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712","Type":"ContainerDied","Data":"21675ced684bb2f622563b08ef5ba514a3204509b57520a9b097b3a5886d2e71"} Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.377621 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6ba42cf7-cc02-4214-a4e5-c20d987aed64","Type":"ContainerStarted","Data":"f75eb2b7ee5bc196770975b00f009fd57092c28d9c34661ea69a2fb6b6c0de0b"} Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.379053 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.380348 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fgn5f" event={"ID":"cd8b1052-9050-4771-8be4-3138d9c54d62","Type":"ContainerStarted","Data":"d08475053790be579b9a812493867e8c9fc5b2ce1c2bbe138dac1571e6b687eb"} Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.380690 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-fgn5f" Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.382598 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"410c54ac-f4e0-4c9f-873e-939b19eb303b","Type":"ContainerStarted","Data":"2dd81c58fd1ed20f15860029fdb2a65cd636e691c496d2a809cac70d966764f7"} Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.400504 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"13cd3da7-02fa-42c2-a62a-527df23e92b1","Type":"ContainerStarted","Data":"4f4da16aeb73bad6f7df576e6b8c947cf45f4b5cbf74e5157343cb5b0904a14b"} Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.404230 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" 
event={"ID":"6bbf6f55-78b8-41e6-9e55-be10664ab74e","Type":"ContainerStarted","Data":"00e24018b9831506f247126b75cc66bb4e0f831ff1f1f124eeed1811433d1c91"} Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.404503 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.406871 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"255fbb78-ee7b-4e1f-bd48-d260792d9be4","Type":"ContainerStarted","Data":"46d8a0f1394cb720c463ac27f258f285d74f5c190dd343f9e300a17ac6671ad7"} Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.413434 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" event={"ID":"323e207f-794c-4f9e-8a02-b567237b08e6","Type":"ContainerStarted","Data":"e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c"} Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.413684 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.415143 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.1437006 podStartE2EDuration="18.415119917s" podCreationTimestamp="2025-11-25 20:42:17 +0000 UTC" firstStartedPulling="2025-11-25 20:42:26.188248771 +0000 UTC m=+927.300782163" lastFinishedPulling="2025-11-25 20:42:34.459668088 +0000 UTC m=+935.572201480" observedRunningTime="2025-11-25 20:42:35.410849774 +0000 UTC m=+936.523383166" watchObservedRunningTime="2025-11-25 20:42:35.415119917 +0000 UTC m=+936.527653309" Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.439234 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.101291973 podStartE2EDuration="20.439210654s" 
podCreationTimestamp="2025-11-25 20:42:15 +0000 UTC" firstStartedPulling="2025-11-25 20:42:26.023062187 +0000 UTC m=+927.135595579" lastFinishedPulling="2025-11-25 20:42:33.360980868 +0000 UTC m=+934.473514260" observedRunningTime="2025-11-25 20:42:35.431370547 +0000 UTC m=+936.543903949" watchObservedRunningTime="2025-11-25 20:42:35.439210654 +0000 UTC m=+936.551744046" Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.456583 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-fgn5f" podStartSLOduration=7.134947895 podStartE2EDuration="14.456542973s" podCreationTimestamp="2025-11-25 20:42:21 +0000 UTC" firstStartedPulling="2025-11-25 20:42:26.420340856 +0000 UTC m=+927.532874248" lastFinishedPulling="2025-11-25 20:42:33.741935934 +0000 UTC m=+934.854469326" observedRunningTime="2025-11-25 20:42:35.45340417 +0000 UTC m=+936.565937562" watchObservedRunningTime="2025-11-25 20:42:35.456542973 +0000 UTC m=+936.569076365" Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.499026 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" podStartSLOduration=24.002044561 podStartE2EDuration="24.499001328s" podCreationTimestamp="2025-11-25 20:42:11 +0000 UTC" firstStartedPulling="2025-11-25 20:42:26.027928566 +0000 UTC m=+927.140461958" lastFinishedPulling="2025-11-25 20:42:26.524885333 +0000 UTC m=+927.637418725" observedRunningTime="2025-11-25 20:42:35.490241806 +0000 UTC m=+936.602775198" watchObservedRunningTime="2025-11-25 20:42:35.499001328 +0000 UTC m=+936.611534710" Nov 25 20:42:35 crc kubenswrapper[4983]: I1125 20:42:35.512008 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" podStartSLOduration=24.013841002 podStartE2EDuration="24.511986831s" podCreationTimestamp="2025-11-25 20:42:11 +0000 UTC" firstStartedPulling="2025-11-25 20:42:26.013789321 +0000 UTC m=+927.126322723" 
lastFinishedPulling="2025-11-25 20:42:26.51193516 +0000 UTC m=+927.624468552" observedRunningTime="2025-11-25 20:42:35.507278877 +0000 UTC m=+936.619812269" watchObservedRunningTime="2025-11-25 20:42:35.511986831 +0000 UTC m=+936.624520223" Nov 25 20:42:36 crc kubenswrapper[4983]: I1125 20:42:36.423224 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a7aa78f0-48cd-4845-8a44-52fb63183dff","Type":"ContainerStarted","Data":"ef9a8cd4d098d443b9edc0388e0cba204f522814ecbf14dc542f8da23f5eb227"} Nov 25 20:42:36 crc kubenswrapper[4983]: I1125 20:42:36.427815 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90","Type":"ContainerStarted","Data":"82321d33208ab6fde63d4a7f69525aa9078ba24736057f6bf87559c7b2c6d966"} Nov 25 20:42:36 crc kubenswrapper[4983]: I1125 20:42:36.431278 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-47bp7" event={"ID":"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712","Type":"ContainerStarted","Data":"440aa333f3184a7e4129466ee20dc7981bdcfb873ab21283495050666b5ac2ef"} Nov 25 20:42:36 crc kubenswrapper[4983]: I1125 20:42:36.432104 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:36 crc kubenswrapper[4983]: I1125 20:42:36.432250 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:42:36 crc kubenswrapper[4983]: I1125 20:42:36.432337 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-47bp7" event={"ID":"1ab2fd6b-f417-4b0e-b1ac-d374d64b7712","Type":"ContainerStarted","Data":"dd34710938f8c5db3fac268afd2882510ca3c0f4ee62eb5af7909feb4ea4d613"} Nov 25 20:42:36 crc kubenswrapper[4983]: I1125 20:42:36.500605 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-47bp7" 
podStartSLOduration=8.240328393 podStartE2EDuration="15.500536686s" podCreationTimestamp="2025-11-25 20:42:21 +0000 UTC" firstStartedPulling="2025-11-25 20:42:26.4170922 +0000 UTC m=+927.529625592" lastFinishedPulling="2025-11-25 20:42:33.677300483 +0000 UTC m=+934.789833885" observedRunningTime="2025-11-25 20:42:36.49767378 +0000 UTC m=+937.610207182" watchObservedRunningTime="2025-11-25 20:42:36.500536686 +0000 UTC m=+937.613070078" Nov 25 20:42:38 crc kubenswrapper[4983]: I1125 20:42:38.450623 4983 generic.go:334] "Generic (PLEG): container finished" podID="ca63c157-60df-45de-854f-03989f565e8f" containerID="44d1b37a30949ee7fd1b2db32e2294e880ba812e75e3136aaad8fec2f01978fd" exitCode=0 Nov 25 20:42:38 crc kubenswrapper[4983]: I1125 20:42:38.450742 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca63c157-60df-45de-854f-03989f565e8f","Type":"ContainerDied","Data":"44d1b37a30949ee7fd1b2db32e2294e880ba812e75e3136aaad8fec2f01978fd"} Nov 25 20:42:38 crc kubenswrapper[4983]: I1125 20:42:38.455061 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"255fbb78-ee7b-4e1f-bd48-d260792d9be4","Type":"ContainerStarted","Data":"28e20e8bc42fa67735a425dc6c30ed1f3435e5fd004aa9de899bfd836ea8c40d"} Nov 25 20:42:38 crc kubenswrapper[4983]: I1125 20:42:38.458098 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"410c54ac-f4e0-4c9f-873e-939b19eb303b","Type":"ContainerStarted","Data":"c3891d9212820b9b1476a84a4203d7abc29f4d9b6044b52e473eddc793fe5bf3"} Nov 25 20:42:38 crc kubenswrapper[4983]: I1125 20:42:38.460863 4983 generic.go:334] "Generic (PLEG): container finished" podID="13cd3da7-02fa-42c2-a62a-527df23e92b1" containerID="4f4da16aeb73bad6f7df576e6b8c947cf45f4b5cbf74e5157343cb5b0904a14b" exitCode=0 Nov 25 20:42:38 crc kubenswrapper[4983]: I1125 20:42:38.461070 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"13cd3da7-02fa-42c2-a62a-527df23e92b1","Type":"ContainerDied","Data":"4f4da16aeb73bad6f7df576e6b8c947cf45f4b5cbf74e5157343cb5b0904a14b"} Nov 25 20:42:38 crc kubenswrapper[4983]: I1125 20:42:38.538776 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.382635354 podStartE2EDuration="18.538757163s" podCreationTimestamp="2025-11-25 20:42:20 +0000 UTC" firstStartedPulling="2025-11-25 20:42:26.681806438 +0000 UTC m=+927.794339830" lastFinishedPulling="2025-11-25 20:42:37.837928207 +0000 UTC m=+938.950461639" observedRunningTime="2025-11-25 20:42:38.534922452 +0000 UTC m=+939.647455904" watchObservedRunningTime="2025-11-25 20:42:38.538757163 +0000 UTC m=+939.651290565" Nov 25 20:42:39 crc kubenswrapper[4983]: I1125 20:42:39.471506 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"13cd3da7-02fa-42c2-a62a-527df23e92b1","Type":"ContainerStarted","Data":"d89fc15d3949d7b7fd70c9af3c79d88cc967d4e7c7063825c8a2ba1be7d68441"} Nov 25 20:42:39 crc kubenswrapper[4983]: I1125 20:42:39.474722 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ca63c157-60df-45de-854f-03989f565e8f","Type":"ContainerStarted","Data":"d2fd849fdaa39615308728964ff16caa2e028a17b3b40cb5d4da034ac3929105"} Nov 25 20:42:39 crc kubenswrapper[4983]: I1125 20:42:39.504542 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=17.858353763 podStartE2EDuration="25.504517795s" podCreationTimestamp="2025-11-25 20:42:14 +0000 UTC" firstStartedPulling="2025-11-25 20:42:26.031271204 +0000 UTC m=+927.143804596" lastFinishedPulling="2025-11-25 20:42:33.677435236 +0000 UTC m=+934.789968628" observedRunningTime="2025-11-25 20:42:39.501882515 +0000 UTC m=+940.614415927" watchObservedRunningTime="2025-11-25 
20:42:39.504517795 +0000 UTC m=+940.617051187" Nov 25 20:42:39 crc kubenswrapper[4983]: I1125 20:42:39.505533 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=7.203256009 podStartE2EDuration="16.505526761s" podCreationTimestamp="2025-11-25 20:42:23 +0000 UTC" firstStartedPulling="2025-11-25 20:42:28.529332077 +0000 UTC m=+929.641865469" lastFinishedPulling="2025-11-25 20:42:37.831602829 +0000 UTC m=+938.944136221" observedRunningTime="2025-11-25 20:42:38.56392413 +0000 UTC m=+939.676457532" watchObservedRunningTime="2025-11-25 20:42:39.505526761 +0000 UTC m=+940.618060153" Nov 25 20:42:39 crc kubenswrapper[4983]: I1125 20:42:39.536650 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=19.160228537 podStartE2EDuration="26.536616425s" podCreationTimestamp="2025-11-25 20:42:13 +0000 UTC" firstStartedPulling="2025-11-25 20:42:26.424968269 +0000 UTC m=+927.537501651" lastFinishedPulling="2025-11-25 20:42:33.801356147 +0000 UTC m=+934.913889539" observedRunningTime="2025-11-25 20:42:39.526519657 +0000 UTC m=+940.639053039" watchObservedRunningTime="2025-11-25 20:42:39.536616425 +0000 UTC m=+940.649149847" Nov 25 20:42:39 crc kubenswrapper[4983]: I1125 20:42:39.754709 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:39 crc kubenswrapper[4983]: I1125 20:42:39.754793 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:39 crc kubenswrapper[4983]: I1125 20:42:39.814499 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.218272 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 
20:42:40.261214 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.483493 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.519400 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.529048 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.688513 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k69m2"] Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.688860 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" podUID="323e207f-794c-4f9e-8a02-b567237b08e6" containerName="dnsmasq-dns" containerID="cri-o://e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c" gracePeriod=10 Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.689721 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.781963 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cg64x"] Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.784731 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.789899 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.795818 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cg64x"] Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.860601 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n57p6\" (UniqueName: \"kubernetes.io/projected/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-kube-api-access-n57p6\") pod \"dnsmasq-dns-7fd796d7df-cg64x\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.860686 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-cg64x\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.860752 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-config\") pod \"dnsmasq-dns-7fd796d7df-cg64x\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.860786 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-cg64x\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.865308 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jxnnh"] Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.868523 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.872508 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.892873 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jxnnh"] Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.963708 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/18caf88a-0da7-4144-9c11-301f0a49f3fb-ovs-rundir\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.964089 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n57p6\" (UniqueName: \"kubernetes.io/projected/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-kube-api-access-n57p6\") pod \"dnsmasq-dns-7fd796d7df-cg64x\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.964117 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9krr\" (UniqueName: \"kubernetes.io/projected/18caf88a-0da7-4144-9c11-301f0a49f3fb-kube-api-access-w9krr\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:40 crc kubenswrapper[4983]: 
I1125 20:42:40.964142 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-cg64x\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.964175 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18caf88a-0da7-4144-9c11-301f0a49f3fb-combined-ca-bundle\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.964214 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-config\") pod \"dnsmasq-dns-7fd796d7df-cg64x\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.964234 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-cg64x\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.964259 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/18caf88a-0da7-4144-9c11-301f0a49f3fb-ovn-rundir\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.964279 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18caf88a-0da7-4144-9c11-301f0a49f3fb-config\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.964321 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/18caf88a-0da7-4144-9c11-301f0a49f3fb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.965620 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vj48c"] Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.971305 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-cg64x\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.972659 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" podUID="6bbf6f55-78b8-41e6-9e55-be10664ab74e" containerName="dnsmasq-dns" containerID="cri-o://00e24018b9831506f247126b75cc66bb4e0f831ff1f1f124eeed1811433d1c91" gracePeriod=10 Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.978469 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-cg64x\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.978720 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-config\") pod \"dnsmasq-dns-7fd796d7df-cg64x\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:40 crc kubenswrapper[4983]: I1125 20:42:40.984460 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.016705 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7wkqm"] Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.016912 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n57p6\" (UniqueName: \"kubernetes.io/projected/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-kube-api-access-n57p6\") pod \"dnsmasq-dns-7fd796d7df-cg64x\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.018657 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.021133 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.059441 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7wkqm"] Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.065299 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.068074 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.069861 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18caf88a-0da7-4144-9c11-301f0a49f3fb-config\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.070029 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.070220 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/18caf88a-0da7-4144-9c11-301f0a49f3fb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.070348 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.070477 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.070587 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/18caf88a-0da7-4144-9c11-301f0a49f3fb-ovs-rundir\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.070680 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-config\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.070800 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv4s4\" (UniqueName: \"kubernetes.io/projected/8c56e13a-8335-4d41-9bb3-5894d35814dd-kube-api-access-pv4s4\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.070965 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9krr\" (UniqueName: \"kubernetes.io/projected/18caf88a-0da7-4144-9c11-301f0a49f3fb-kube-api-access-w9krr\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.071136 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18caf88a-0da7-4144-9c11-301f0a49f3fb-combined-ca-bundle\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " 
pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.071293 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/18caf88a-0da7-4144-9c11-301f0a49f3fb-ovn-rundir\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.071433 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18caf88a-0da7-4144-9c11-301f0a49f3fb-config\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.071845 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.072056 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/18caf88a-0da7-4144-9c11-301f0a49f3fb-ovs-rundir\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.072148 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.072303 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.072473 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jkq2c" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.072741 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/host-path/18caf88a-0da7-4144-9c11-301f0a49f3fb-ovn-rundir\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.078479 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18caf88a-0da7-4144-9c11-301f0a49f3fb-combined-ca-bundle\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.082202 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.102156 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/18caf88a-0da7-4144-9c11-301f0a49f3fb-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.102680 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9krr\" (UniqueName: \"kubernetes.io/projected/18caf88a-0da7-4144-9c11-301f0a49f3fb-kube-api-access-w9krr\") pod \"ovn-controller-metrics-jxnnh\" (UID: \"18caf88a-0da7-4144-9c11-301f0a49f3fb\") " pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.156774 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.172940 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-ovn-rundir\") pod 
\"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.173347 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.173372 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-scripts\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.173433 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.173466 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gzlr\" (UniqueName: \"kubernetes.io/projected/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-kube-api-access-8gzlr\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.173502 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " 
pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.173529 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.173584 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-config\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.173971 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.174011 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.174061 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-config\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.174108 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv4s4\" (UniqueName: \"kubernetes.io/projected/8c56e13a-8335-4d41-9bb3-5894d35814dd-kube-api-access-pv4s4\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.174781 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.175384 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.175959 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-config\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.179092 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.187115 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.214499 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv4s4\" (UniqueName: \"kubernetes.io/projected/8c56e13a-8335-4d41-9bb3-5894d35814dd-kube-api-access-pv4s4\") pod \"dnsmasq-dns-86db49b7ff-7wkqm\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.215208 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jxnnh" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.275283 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gzlr\" (UniqueName: \"kubernetes.io/projected/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-kube-api-access-8gzlr\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.275798 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.275901 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.276199 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-config\") 
pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.277166 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.277674 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.277703 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-scripts\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.278711 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-config\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.279392 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.279591 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-scripts\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.282239 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.284867 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.290702 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.295184 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gzlr\" (UniqueName: \"kubernetes.io/projected/b06f6c03-dbba-48c9-901d-8cf6ef8048b1-kube-api-access-8gzlr\") pod \"ovn-northd-0\" (UID: \"b06f6c03-dbba-48c9-901d-8cf6ef8048b1\") " pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.316284 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.379411 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323e207f-794c-4f9e-8a02-b567237b08e6-dns-svc\") pod \"323e207f-794c-4f9e-8a02-b567237b08e6\" (UID: \"323e207f-794c-4f9e-8a02-b567237b08e6\") " Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.379677 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323e207f-794c-4f9e-8a02-b567237b08e6-config\") pod \"323e207f-794c-4f9e-8a02-b567237b08e6\" (UID: \"323e207f-794c-4f9e-8a02-b567237b08e6\") " Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.379717 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wfsf\" (UniqueName: \"kubernetes.io/projected/323e207f-794c-4f9e-8a02-b567237b08e6-kube-api-access-7wfsf\") pod \"323e207f-794c-4f9e-8a02-b567237b08e6\" (UID: \"323e207f-794c-4f9e-8a02-b567237b08e6\") " Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.389069 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323e207f-794c-4f9e-8a02-b567237b08e6-kube-api-access-7wfsf" (OuterVolumeSpecName: "kube-api-access-7wfsf") pod "323e207f-794c-4f9e-8a02-b567237b08e6" (UID: "323e207f-794c-4f9e-8a02-b567237b08e6"). InnerVolumeSpecName "kube-api-access-7wfsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.458919 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323e207f-794c-4f9e-8a02-b567237b08e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "323e207f-794c-4f9e-8a02-b567237b08e6" (UID: "323e207f-794c-4f9e-8a02-b567237b08e6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.479213 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323e207f-794c-4f9e-8a02-b567237b08e6-config" (OuterVolumeSpecName: "config") pod "323e207f-794c-4f9e-8a02-b567237b08e6" (UID: "323e207f-794c-4f9e-8a02-b567237b08e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.482142 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323e207f-794c-4f9e-8a02-b567237b08e6-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.482168 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wfsf\" (UniqueName: \"kubernetes.io/projected/323e207f-794c-4f9e-8a02-b567237b08e6-kube-api-access-7wfsf\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.482180 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323e207f-794c-4f9e-8a02-b567237b08e6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.488942 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.501687 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.520550 4983 generic.go:334] "Generic (PLEG): container finished" podID="323e207f-794c-4f9e-8a02-b567237b08e6" containerID="e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c" exitCode=0 Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.520625 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.520653 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" event={"ID":"323e207f-794c-4f9e-8a02-b567237b08e6","Type":"ContainerDied","Data":"e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c"} Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.520696 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-k69m2" event={"ID":"323e207f-794c-4f9e-8a02-b567237b08e6","Type":"ContainerDied","Data":"d4248b75e459ddbada65085cc16a2d2e1b05e29da655565c8fdfc2010ccdb092"} Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.520719 4983 scope.go:117] "RemoveContainer" containerID="e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.525034 4983 generic.go:334] "Generic (PLEG): container finished" podID="6bbf6f55-78b8-41e6-9e55-be10664ab74e" containerID="00e24018b9831506f247126b75cc66bb4e0f831ff1f1f124eeed1811433d1c91" exitCode=0 Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.525429 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" event={"ID":"6bbf6f55-78b8-41e6-9e55-be10664ab74e","Type":"ContainerDied","Data":"00e24018b9831506f247126b75cc66bb4e0f831ff1f1f124eeed1811433d1c91"} Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.552863 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.559632 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k69m2"] Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.582440 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-k69m2"] Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.595466 4983 scope.go:117] "RemoveContainer" containerID="15142d764ee9f095124a8679bebc8a0dce2a3de7a3366d12287c646872bcc21f" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.625624 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323e207f-794c-4f9e-8a02-b567237b08e6" path="/var/lib/kubelet/pods/323e207f-794c-4f9e-8a02-b567237b08e6/volumes" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.650293 4983 scope.go:117] "RemoveContainer" containerID="e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c" Nov 25 20:42:41 crc kubenswrapper[4983]: E1125 20:42:41.650772 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c\": container with ID starting with e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c not found: ID does not exist" containerID="e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.650804 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c"} err="failed to get container status \"e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c\": rpc error: code = NotFound desc = could not find container \"e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c\": container with ID starting with 
e97abd7e7d82f495aab2fb9d2740e40226d0ffabd2c706469ff9f22ace93592c not found: ID does not exist" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.650826 4983 scope.go:117] "RemoveContainer" containerID="15142d764ee9f095124a8679bebc8a0dce2a3de7a3366d12287c646872bcc21f" Nov 25 20:42:41 crc kubenswrapper[4983]: E1125 20:42:41.651538 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15142d764ee9f095124a8679bebc8a0dce2a3de7a3366d12287c646872bcc21f\": container with ID starting with 15142d764ee9f095124a8679bebc8a0dce2a3de7a3366d12287c646872bcc21f not found: ID does not exist" containerID="15142d764ee9f095124a8679bebc8a0dce2a3de7a3366d12287c646872bcc21f" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.651599 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15142d764ee9f095124a8679bebc8a0dce2a3de7a3366d12287c646872bcc21f"} err="failed to get container status \"15142d764ee9f095124a8679bebc8a0dce2a3de7a3366d12287c646872bcc21f\": rpc error: code = NotFound desc = could not find container \"15142d764ee9f095124a8679bebc8a0dce2a3de7a3366d12287c646872bcc21f\": container with ID starting with 15142d764ee9f095124a8679bebc8a0dce2a3de7a3366d12287c646872bcc21f not found: ID does not exist" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.686677 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8jsn\" (UniqueName: \"kubernetes.io/projected/6bbf6f55-78b8-41e6-9e55-be10664ab74e-kube-api-access-h8jsn\") pod \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\" (UID: \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\") " Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.687184 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bbf6f55-78b8-41e6-9e55-be10664ab74e-dns-svc\") pod \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\" (UID: 
\"6bbf6f55-78b8-41e6-9e55-be10664ab74e\") " Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.687468 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bbf6f55-78b8-41e6-9e55-be10664ab74e-config\") pod \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\" (UID: \"6bbf6f55-78b8-41e6-9e55-be10664ab74e\") " Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.692924 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bbf6f55-78b8-41e6-9e55-be10664ab74e-kube-api-access-h8jsn" (OuterVolumeSpecName: "kube-api-access-h8jsn") pod "6bbf6f55-78b8-41e6-9e55-be10664ab74e" (UID: "6bbf6f55-78b8-41e6-9e55-be10664ab74e"). InnerVolumeSpecName "kube-api-access-h8jsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.730067 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bbf6f55-78b8-41e6-9e55-be10664ab74e-config" (OuterVolumeSpecName: "config") pod "6bbf6f55-78b8-41e6-9e55-be10664ab74e" (UID: "6bbf6f55-78b8-41e6-9e55-be10664ab74e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.730173 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bbf6f55-78b8-41e6-9e55-be10664ab74e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6bbf6f55-78b8-41e6-9e55-be10664ab74e" (UID: "6bbf6f55-78b8-41e6-9e55-be10664ab74e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.751169 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cg64x"] Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.789793 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8jsn\" (UniqueName: \"kubernetes.io/projected/6bbf6f55-78b8-41e6-9e55-be10664ab74e-kube-api-access-h8jsn\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.790256 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bbf6f55-78b8-41e6-9e55-be10664ab74e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.790271 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bbf6f55-78b8-41e6-9e55-be10664ab74e-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:41 crc kubenswrapper[4983]: I1125 20:42:41.916253 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jxnnh"] Nov 25 20:42:41 crc kubenswrapper[4983]: W1125 20:42:41.921638 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18caf88a_0da7_4144_9c11_301f0a49f3fb.slice/crio-a676b66278138a01915cec111293fa6b0d07bc1bc652c9d9bc3fe909b43b3d37 WatchSource:0}: Error finding container a676b66278138a01915cec111293fa6b0d07bc1bc652c9d9bc3fe909b43b3d37: Status 404 returned error can't find the container with id a676b66278138a01915cec111293fa6b0d07bc1bc652c9d9bc3fe909b43b3d37 Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.045183 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7wkqm"] Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.126445 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-northd-0"] Nov 25 20:42:42 crc kubenswrapper[4983]: W1125 20:42:42.133580 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb06f6c03_dbba_48c9_901d_8cf6ef8048b1.slice/crio-64f74f4d4947c22cb574d7c54020c850e14ca91f561a13d4bad25d71490bcd54 WatchSource:0}: Error finding container 64f74f4d4947c22cb574d7c54020c850e14ca91f561a13d4bad25d71490bcd54: Status 404 returned error can't find the container with id 64f74f4d4947c22cb574d7c54020c850e14ca91f561a13d4bad25d71490bcd54 Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.535278 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b06f6c03-dbba-48c9-901d-8cf6ef8048b1","Type":"ContainerStarted","Data":"64f74f4d4947c22cb574d7c54020c850e14ca91f561a13d4bad25d71490bcd54"} Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.537392 4983 generic.go:334] "Generic (PLEG): container finished" podID="8c56e13a-8335-4d41-9bb3-5894d35814dd" containerID="91cf899d5e69fe256f97e8cee651336a5d8600a69a66b58fe1c970be0aa7ae28" exitCode=0 Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.537526 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" event={"ID":"8c56e13a-8335-4d41-9bb3-5894d35814dd","Type":"ContainerDied","Data":"91cf899d5e69fe256f97e8cee651336a5d8600a69a66b58fe1c970be0aa7ae28"} Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.537609 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" event={"ID":"8c56e13a-8335-4d41-9bb3-5894d35814dd","Type":"ContainerStarted","Data":"6ede1fa9c4f2b1201f275633a62bc90096b0c274cab72bff923a5db8ace220c2"} Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.540528 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jxnnh" 
event={"ID":"18caf88a-0da7-4144-9c11-301f0a49f3fb","Type":"ContainerStarted","Data":"6e231fc52d0a24bda3843c3547cb01840b9d342b1c1de1cd79976156351b6e64"} Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.540596 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jxnnh" event={"ID":"18caf88a-0da7-4144-9c11-301f0a49f3fb","Type":"ContainerStarted","Data":"a676b66278138a01915cec111293fa6b0d07bc1bc652c9d9bc3fe909b43b3d37"} Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.545209 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" event={"ID":"6bbf6f55-78b8-41e6-9e55-be10664ab74e","Type":"ContainerDied","Data":"7e394cfb32f714217cbdf8a2a65aea19c651be04b16831e8c4ae14d32be251ec"} Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.545214 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-vj48c" Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.545299 4983 scope.go:117] "RemoveContainer" containerID="00e24018b9831506f247126b75cc66bb4e0f831ff1f1f124eeed1811433d1c91" Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.550182 4983 generic.go:334] "Generic (PLEG): container finished" podID="e2f07d9e-1a48-4e50-9d3b-93aec12276f5" containerID="2d373dab235b5f395ef334838b12f187d7340a76d760bc9421a4a7741a851141" exitCode=0 Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.551249 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" event={"ID":"e2f07d9e-1a48-4e50-9d3b-93aec12276f5","Type":"ContainerDied","Data":"2d373dab235b5f395ef334838b12f187d7340a76d760bc9421a4a7741a851141"} Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.551330 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" 
event={"ID":"e2f07d9e-1a48-4e50-9d3b-93aec12276f5","Type":"ContainerStarted","Data":"bc33acbb244abe15c0f0aed544da6188a08ac211d72b83b48302caa9910b48e3"} Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.587106 4983 scope.go:117] "RemoveContainer" containerID="6248f92777a0d4cbb4d57e08c0903b67aa348b64fe45a84210f2e209107ddbf6" Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.624751 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jxnnh" podStartSLOduration=2.6247203199999998 podStartE2EDuration="2.62472032s" podCreationTimestamp="2025-11-25 20:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:42:42.612733483 +0000 UTC m=+943.725266875" watchObservedRunningTime="2025-11-25 20:42:42.62472032 +0000 UTC m=+943.737253712" Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.659284 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vj48c"] Nov 25 20:42:42 crc kubenswrapper[4983]: I1125 20:42:42.672768 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-vj48c"] Nov 25 20:42:43 crc kubenswrapper[4983]: I1125 20:42:43.563447 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" event={"ID":"8c56e13a-8335-4d41-9bb3-5894d35814dd","Type":"ContainerStarted","Data":"9f301d23f12460edabe2161b03afd5f1c0115d0d5d5f69dd4928cfc1265f6504"} Nov 25 20:42:43 crc kubenswrapper[4983]: I1125 20:42:43.577888 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" event={"ID":"e2f07d9e-1a48-4e50-9d3b-93aec12276f5","Type":"ContainerStarted","Data":"cc34d918b0a1c6505c9560edd62c01d25a8c59df4450f75c78126fbd9ebe7e52"} Nov 25 20:42:43 crc kubenswrapper[4983]: I1125 20:42:43.578088 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:43 crc kubenswrapper[4983]: I1125 20:42:43.581095 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b06f6c03-dbba-48c9-901d-8cf6ef8048b1","Type":"ContainerStarted","Data":"b53d1fa9fdda340aa8fda9b096fffe827804773ab76a36e04130fd6cad67649f"} Nov 25 20:42:43 crc kubenswrapper[4983]: I1125 20:42:43.581674 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 25 20:42:43 crc kubenswrapper[4983]: I1125 20:42:43.594500 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" podStartSLOduration=3.594470317 podStartE2EDuration="3.594470317s" podCreationTimestamp="2025-11-25 20:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:42:43.589389043 +0000 UTC m=+944.701922435" watchObservedRunningTime="2025-11-25 20:42:43.594470317 +0000 UTC m=+944.707003709" Nov 25 20:42:43 crc kubenswrapper[4983]: I1125 20:42:43.622539 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bbf6f55-78b8-41e6-9e55-be10664ab74e" path="/var/lib/kubelet/pods/6bbf6f55-78b8-41e6-9e55-be10664ab74e/volumes" Nov 25 20:42:43 crc kubenswrapper[4983]: I1125 20:42:43.622586 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" podStartSLOduration=3.62252933 podStartE2EDuration="3.62252933s" podCreationTimestamp="2025-11-25 20:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:42:43.614172539 +0000 UTC m=+944.726705931" watchObservedRunningTime="2025-11-25 20:42:43.62252933 +0000 UTC m=+944.735062752" Nov 25 20:42:43 crc kubenswrapper[4983]: I1125 20:42:43.639321 4983 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.616500932 podStartE2EDuration="3.639294684s" podCreationTimestamp="2025-11-25 20:42:40 +0000 UTC" firstStartedPulling="2025-11-25 20:42:42.135860176 +0000 UTC m=+943.248393588" lastFinishedPulling="2025-11-25 20:42:43.158653928 +0000 UTC m=+944.271187340" observedRunningTime="2025-11-25 20:42:43.638116003 +0000 UTC m=+944.750649395" watchObservedRunningTime="2025-11-25 20:42:43.639294684 +0000 UTC m=+944.751828116" Nov 25 20:42:44 crc kubenswrapper[4983]: I1125 20:42:44.443510 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 20:42:44 crc kubenswrapper[4983]: I1125 20:42:44.443597 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 20:42:44 crc kubenswrapper[4983]: I1125 20:42:44.517272 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 25 20:42:44 crc kubenswrapper[4983]: I1125 20:42:44.590304 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b06f6c03-dbba-48c9-901d-8cf6ef8048b1","Type":"ContainerStarted","Data":"98a1b86342b1f10ed9eda426d451c846b486021ce4c16ffe6dcb22d8ad292508"} Nov 25 20:42:44 crc kubenswrapper[4983]: I1125 20:42:44.591228 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:44 crc kubenswrapper[4983]: I1125 20:42:44.688871 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.852821 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-89aa-account-create-update-gpfwm"] Nov 25 20:42:45 crc kubenswrapper[4983]: E1125 20:42:45.853314 4983 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6bbf6f55-78b8-41e6-9e55-be10664ab74e" containerName="init" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.853340 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bbf6f55-78b8-41e6-9e55-be10664ab74e" containerName="init" Nov 25 20:42:45 crc kubenswrapper[4983]: E1125 20:42:45.853367 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323e207f-794c-4f9e-8a02-b567237b08e6" containerName="init" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.853380 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="323e207f-794c-4f9e-8a02-b567237b08e6" containerName="init" Nov 25 20:42:45 crc kubenswrapper[4983]: E1125 20:42:45.853418 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323e207f-794c-4f9e-8a02-b567237b08e6" containerName="dnsmasq-dns" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.853433 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="323e207f-794c-4f9e-8a02-b567237b08e6" containerName="dnsmasq-dns" Nov 25 20:42:45 crc kubenswrapper[4983]: E1125 20:42:45.853461 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bbf6f55-78b8-41e6-9e55-be10664ab74e" containerName="dnsmasq-dns" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.853477 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bbf6f55-78b8-41e6-9e55-be10664ab74e" containerName="dnsmasq-dns" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.853781 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="323e207f-794c-4f9e-8a02-b567237b08e6" containerName="dnsmasq-dns" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.853805 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bbf6f55-78b8-41e6-9e55-be10664ab74e" containerName="dnsmasq-dns" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.854780 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-89aa-account-create-update-gpfwm" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.863416 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.873834 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.873885 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.876867 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-s9cqc"] Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.879460 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s9cqc" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.894266 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-89aa-account-create-update-gpfwm"] Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.915823 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-s9cqc"] Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.974529 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86bkk\" (UniqueName: \"kubernetes.io/projected/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed-kube-api-access-86bkk\") pod \"keystone-89aa-account-create-update-gpfwm\" (UID: \"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed\") " pod="openstack/keystone-89aa-account-create-update-gpfwm" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.974929 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn6wn\" (UniqueName: 
\"kubernetes.io/projected/55267844-5a91-44d3-b4bc-6292f74eb7bb-kube-api-access-kn6wn\") pod \"keystone-db-create-s9cqc\" (UID: \"55267844-5a91-44d3-b4bc-6292f74eb7bb\") " pod="openstack/keystone-db-create-s9cqc" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.975058 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed-operator-scripts\") pod \"keystone-89aa-account-create-update-gpfwm\" (UID: \"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed\") " pod="openstack/keystone-89aa-account-create-update-gpfwm" Nov 25 20:42:45 crc kubenswrapper[4983]: I1125 20:42:45.975191 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55267844-5a91-44d3-b4bc-6292f74eb7bb-operator-scripts\") pod \"keystone-db-create-s9cqc\" (UID: \"55267844-5a91-44d3-b4bc-6292f74eb7bb\") " pod="openstack/keystone-db-create-s9cqc" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.042288 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-fz9jq"] Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.043990 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-fz9jq" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.056279 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-fz9jq"] Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.077340 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55267844-5a91-44d3-b4bc-6292f74eb7bb-operator-scripts\") pod \"keystone-db-create-s9cqc\" (UID: \"55267844-5a91-44d3-b4bc-6292f74eb7bb\") " pod="openstack/keystone-db-create-s9cqc" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.077822 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86bkk\" (UniqueName: \"kubernetes.io/projected/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed-kube-api-access-86bkk\") pod \"keystone-89aa-account-create-update-gpfwm\" (UID: \"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed\") " pod="openstack/keystone-89aa-account-create-update-gpfwm" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.077922 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn6wn\" (UniqueName: \"kubernetes.io/projected/55267844-5a91-44d3-b4bc-6292f74eb7bb-kube-api-access-kn6wn\") pod \"keystone-db-create-s9cqc\" (UID: \"55267844-5a91-44d3-b4bc-6292f74eb7bb\") " pod="openstack/keystone-db-create-s9cqc" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.078040 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed-operator-scripts\") pod \"keystone-89aa-account-create-update-gpfwm\" (UID: \"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed\") " pod="openstack/keystone-89aa-account-create-update-gpfwm" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.078671 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed-operator-scripts\") pod \"keystone-89aa-account-create-update-gpfwm\" (UID: \"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed\") " pod="openstack/keystone-89aa-account-create-update-gpfwm" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.078779 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55267844-5a91-44d3-b4bc-6292f74eb7bb-operator-scripts\") pod \"keystone-db-create-s9cqc\" (UID: \"55267844-5a91-44d3-b4bc-6292f74eb7bb\") " pod="openstack/keystone-db-create-s9cqc" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.104723 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86bkk\" (UniqueName: \"kubernetes.io/projected/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed-kube-api-access-86bkk\") pod \"keystone-89aa-account-create-update-gpfwm\" (UID: \"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed\") " pod="openstack/keystone-89aa-account-create-update-gpfwm" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.114421 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn6wn\" (UniqueName: \"kubernetes.io/projected/55267844-5a91-44d3-b4bc-6292f74eb7bb-kube-api-access-kn6wn\") pod \"keystone-db-create-s9cqc\" (UID: \"55267844-5a91-44d3-b4bc-6292f74eb7bb\") " pod="openstack/keystone-db-create-s9cqc" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.153478 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7c89-account-create-update-5nbdz"] Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.154502 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7c89-account-create-update-5nbdz" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.157604 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.161353 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c89-account-create-update-5nbdz"] Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.179949 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4e8454-9ea6-414b-83a4-6c8a16cf983e-operator-scripts\") pod \"placement-db-create-fz9jq\" (UID: \"4f4e8454-9ea6-414b-83a4-6c8a16cf983e\") " pod="openstack/placement-db-create-fz9jq" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.180009 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blqgl\" (UniqueName: \"kubernetes.io/projected/4f4e8454-9ea6-414b-83a4-6c8a16cf983e-kube-api-access-blqgl\") pod \"placement-db-create-fz9jq\" (UID: \"4f4e8454-9ea6-414b-83a4-6c8a16cf983e\") " pod="openstack/placement-db-create-fz9jq" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.189915 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-89aa-account-create-update-gpfwm" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.209124 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-s9cqc" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.281763 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fhls\" (UniqueName: \"kubernetes.io/projected/9e1808b9-63ae-48e4-8516-a119424817b7-kube-api-access-9fhls\") pod \"placement-7c89-account-create-update-5nbdz\" (UID: \"9e1808b9-63ae-48e4-8516-a119424817b7\") " pod="openstack/placement-7c89-account-create-update-5nbdz" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.281877 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e1808b9-63ae-48e4-8516-a119424817b7-operator-scripts\") pod \"placement-7c89-account-create-update-5nbdz\" (UID: \"9e1808b9-63ae-48e4-8516-a119424817b7\") " pod="openstack/placement-7c89-account-create-update-5nbdz" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.281984 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4e8454-9ea6-414b-83a4-6c8a16cf983e-operator-scripts\") pod \"placement-db-create-fz9jq\" (UID: \"4f4e8454-9ea6-414b-83a4-6c8a16cf983e\") " pod="openstack/placement-db-create-fz9jq" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.282040 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blqgl\" (UniqueName: \"kubernetes.io/projected/4f4e8454-9ea6-414b-83a4-6c8a16cf983e-kube-api-access-blqgl\") pod \"placement-db-create-fz9jq\" (UID: \"4f4e8454-9ea6-414b-83a4-6c8a16cf983e\") " pod="openstack/placement-db-create-fz9jq" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.283165 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4e8454-9ea6-414b-83a4-6c8a16cf983e-operator-scripts\") pod 
\"placement-db-create-fz9jq\" (UID: \"4f4e8454-9ea6-414b-83a4-6c8a16cf983e\") " pod="openstack/placement-db-create-fz9jq" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.314325 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blqgl\" (UniqueName: \"kubernetes.io/projected/4f4e8454-9ea6-414b-83a4-6c8a16cf983e-kube-api-access-blqgl\") pod \"placement-db-create-fz9jq\" (UID: \"4f4e8454-9ea6-414b-83a4-6c8a16cf983e\") " pod="openstack/placement-db-create-fz9jq" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.362278 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-fz9jq" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.384287 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e1808b9-63ae-48e4-8516-a119424817b7-operator-scripts\") pod \"placement-7c89-account-create-update-5nbdz\" (UID: \"9e1808b9-63ae-48e4-8516-a119424817b7\") " pod="openstack/placement-7c89-account-create-update-5nbdz" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.384429 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fhls\" (UniqueName: \"kubernetes.io/projected/9e1808b9-63ae-48e4-8516-a119424817b7-kube-api-access-9fhls\") pod \"placement-7c89-account-create-update-5nbdz\" (UID: \"9e1808b9-63ae-48e4-8516-a119424817b7\") " pod="openstack/placement-7c89-account-create-update-5nbdz" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.385137 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e1808b9-63ae-48e4-8516-a119424817b7-operator-scripts\") pod \"placement-7c89-account-create-update-5nbdz\" (UID: \"9e1808b9-63ae-48e4-8516-a119424817b7\") " pod="openstack/placement-7c89-account-create-update-5nbdz" Nov 25 20:42:46 crc kubenswrapper[4983]: 
I1125 20:42:46.403768 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fhls\" (UniqueName: \"kubernetes.io/projected/9e1808b9-63ae-48e4-8516-a119424817b7-kube-api-access-9fhls\") pod \"placement-7c89-account-create-update-5nbdz\" (UID: \"9e1808b9-63ae-48e4-8516-a119424817b7\") " pod="openstack/placement-7c89-account-create-update-5nbdz" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.580191 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c89-account-create-update-5nbdz" Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.757549 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-89aa-account-create-update-gpfwm"] Nov 25 20:42:46 crc kubenswrapper[4983]: W1125 20:42:46.767619 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55267844_5a91_44d3_b4bc_6292f74eb7bb.slice/crio-b57d06c7047d5d679fc75933070a895678ce2d36f143fb394267f798ade333d5 WatchSource:0}: Error finding container b57d06c7047d5d679fc75933070a895678ce2d36f143fb394267f798ade333d5: Status 404 returned error can't find the container with id b57d06c7047d5d679fc75933070a895678ce2d36f143fb394267f798ade333d5 Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.781322 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-s9cqc"] Nov 25 20:42:46 crc kubenswrapper[4983]: I1125 20:42:46.953050 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-fz9jq"] Nov 25 20:42:46 crc kubenswrapper[4983]: W1125 20:42:46.968496 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f4e8454_9ea6_414b_83a4_6c8a16cf983e.slice/crio-edcdc21970b800bf8b3c2d78977936f963a90967d88cc093b8d1fd9e4f7b6d3c WatchSource:0}: Error finding container 
edcdc21970b800bf8b3c2d78977936f963a90967d88cc093b8d1fd9e4f7b6d3c: Status 404 returned error can't find the container with id edcdc21970b800bf8b3c2d78977936f963a90967d88cc093b8d1fd9e4f7b6d3c Nov 25 20:42:47 crc kubenswrapper[4983]: I1125 20:42:47.190144 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c89-account-create-update-5nbdz"] Nov 25 20:42:47 crc kubenswrapper[4983]: I1125 20:42:47.619460 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fz9jq" event={"ID":"4f4e8454-9ea6-414b-83a4-6c8a16cf983e","Type":"ContainerStarted","Data":"edcdc21970b800bf8b3c2d78977936f963a90967d88cc093b8d1fd9e4f7b6d3c"} Nov 25 20:42:47 crc kubenswrapper[4983]: I1125 20:42:47.621205 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-89aa-account-create-update-gpfwm" event={"ID":"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed","Type":"ContainerStarted","Data":"a8eeec860effe48acb9bbc0f56b37297416eb006ebe721556e73160b0f1a5e4e"} Nov 25 20:42:47 crc kubenswrapper[4983]: I1125 20:42:47.622454 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c89-account-create-update-5nbdz" event={"ID":"9e1808b9-63ae-48e4-8516-a119424817b7","Type":"ContainerStarted","Data":"ad9f808fd596463af8d3b1bad0fd0d5f20f75bf2580a4786547d2eb60f4d05ad"} Nov 25 20:42:47 crc kubenswrapper[4983]: I1125 20:42:47.623375 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s9cqc" event={"ID":"55267844-5a91-44d3-b4bc-6292f74eb7bb","Type":"ContainerStarted","Data":"b57d06c7047d5d679fc75933070a895678ce2d36f143fb394267f798ade333d5"} Nov 25 20:42:47 crc kubenswrapper[4983]: I1125 20:42:47.802777 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 20:42:47 crc kubenswrapper[4983]: I1125 20:42:47.957955 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cg64x"] Nov 25 
20:42:47 crc kubenswrapper[4983]: I1125 20:42:47.958244 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" podUID="e2f07d9e-1a48-4e50-9d3b-93aec12276f5" containerName="dnsmasq-dns" containerID="cri-o://cc34d918b0a1c6505c9560edd62c01d25a8c59df4450f75c78126fbd9ebe7e52" gracePeriod=10 Nov 25 20:42:47 crc kubenswrapper[4983]: I1125 20:42:47.959699 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:47 crc kubenswrapper[4983]: I1125 20:42:47.985680 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-25mgn"] Nov 25 20:42:47 crc kubenswrapper[4983]: I1125 20:42:47.993493 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.007887 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-25mgn"] Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.129855 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-config\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.129935 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.130010 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-dns-svc\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.130037 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.130057 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htt75\" (UniqueName: \"kubernetes.io/projected/c26851c2-2ee8-457b-926b-2ccf02fb308e-kube-api-access-htt75\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.232125 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-dns-svc\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.232187 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.232208 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-htt75\" (UniqueName: \"kubernetes.io/projected/c26851c2-2ee8-457b-926b-2ccf02fb308e-kube-api-access-htt75\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.232282 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-config\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.232343 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.233621 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-dns-svc\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.233685 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.233706 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-config\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.233678 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.255493 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htt75\" (UniqueName: \"kubernetes.io/projected/c26851c2-2ee8-457b-926b-2ccf02fb308e-kube-api-access-htt75\") pod \"dnsmasq-dns-698758b865-25mgn\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.317596 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.633922 4983 generic.go:334] "Generic (PLEG): container finished" podID="e2f07d9e-1a48-4e50-9d3b-93aec12276f5" containerID="cc34d918b0a1c6505c9560edd62c01d25a8c59df4450f75c78126fbd9ebe7e52" exitCode=0 Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.633977 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" event={"ID":"e2f07d9e-1a48-4e50-9d3b-93aec12276f5","Type":"ContainerDied","Data":"cc34d918b0a1c6505c9560edd62c01d25a8c59df4450f75c78126fbd9ebe7e52"} Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.732811 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.869344 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-25mgn"] Nov 25 20:42:48 crc kubenswrapper[4983]: W1125 20:42:48.877298 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc26851c2_2ee8_457b_926b_2ccf02fb308e.slice/crio-50e9d7a2f6048553bd55c4c3bfd9e9363709b8fa286235f0e4a2cd88671ea109 WatchSource:0}: Error finding container 50e9d7a2f6048553bd55c4c3bfd9e9363709b8fa286235f0e4a2cd88671ea109: Status 404 returned error can't find the container with id 50e9d7a2f6048553bd55c4c3bfd9e9363709b8fa286235f0e4a2cd88671ea109 Nov 25 20:42:48 crc kubenswrapper[4983]: I1125 20:42:48.954474 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="13cd3da7-02fa-42c2-a62a-527df23e92b1" containerName="galera" probeResult="failure" output=< Nov 25 20:42:48 crc kubenswrapper[4983]: wsrep_local_state_comment (Joined) differs from Synced Nov 25 20:42:48 crc kubenswrapper[4983]: > Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.066768 4983 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.080710 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.085868 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.085905 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.086604 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gvwgd" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.089383 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.114934 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.152439 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.152490 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-cache\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.152652 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp594\" (UniqueName: 
\"kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-kube-api-access-mp594\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.152674 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-lock\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.152694 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.254677 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.254732 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-cache\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.254841 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp594\" (UniqueName: \"kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-kube-api-access-mp594\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 
20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.254860 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-lock\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.254881 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: E1125 20:42:49.255084 4983 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 20:42:49 crc kubenswrapper[4983]: E1125 20:42:49.255101 4983 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 20:42:49 crc kubenswrapper[4983]: E1125 20:42:49.255173 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift podName:214288a7-ce6d-4844-b3f6-8ab78b7e1b54 nodeName:}" failed. No retries permitted until 2025-11-25 20:42:49.755147708 +0000 UTC m=+950.867681100 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift") pod "swift-storage-0" (UID: "214288a7-ce6d-4844-b3f6-8ab78b7e1b54") : configmap "swift-ring-files" not found Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.255199 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.256000 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-cache\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.256124 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-lock\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.278087 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp594\" (UniqueName: \"kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-kube-api-access-mp594\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.279035 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " 
pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.559613 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-rln2d"] Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.561046 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.563520 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.564239 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.566804 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.589195 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rln2d"] Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.646051 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-25mgn" event={"ID":"c26851c2-2ee8-457b-926b-2ccf02fb308e","Type":"ContainerStarted","Data":"50e9d7a2f6048553bd55c4c3bfd9e9363709b8fa286235f0e4a2cd88671ea109"} Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.686573 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-combined-ca-bundle\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.686690 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-swiftconf\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.686733 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-ring-data-devices\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.686917 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r82q\" (UniqueName: \"kubernetes.io/projected/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-kube-api-access-8r82q\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.687007 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-etc-swift\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.687033 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-dispersionconf\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.687088 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-scripts\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.790544 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r82q\" (UniqueName: \"kubernetes.io/projected/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-kube-api-access-8r82q\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.790899 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-etc-swift\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.790923 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-dispersionconf\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.790970 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-scripts\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.791035 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-combined-ca-bundle\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.791076 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-swiftconf\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.791098 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-ring-data-devices\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.791265 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:49 crc kubenswrapper[4983]: E1125 20:42:49.791543 4983 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 20:42:49 crc kubenswrapper[4983]: E1125 20:42:49.791599 4983 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 20:42:49 crc kubenswrapper[4983]: E1125 20:42:49.791651 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift podName:214288a7-ce6d-4844-b3f6-8ab78b7e1b54 nodeName:}" failed. 
No retries permitted until 2025-11-25 20:42:50.791634413 +0000 UTC m=+951.904167805 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift") pod "swift-storage-0" (UID: "214288a7-ce6d-4844-b3f6-8ab78b7e1b54") : configmap "swift-ring-files" not found Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.792255 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-etc-swift\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.793449 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-ring-data-devices\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.793718 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-scripts\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.802058 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-swiftconf\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.802287 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-combined-ca-bundle\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.803050 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-dispersionconf\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.810854 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r82q\" (UniqueName: \"kubernetes.io/projected/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-kube-api-access-8r82q\") pod \"swift-ring-rebalance-rln2d\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:49 crc kubenswrapper[4983]: I1125 20:42:49.881250 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:42:50 crc kubenswrapper[4983]: I1125 20:42:50.473909 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-rln2d"] Nov 25 20:42:50 crc kubenswrapper[4983]: I1125 20:42:50.655274 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c89-account-create-update-5nbdz" event={"ID":"9e1808b9-63ae-48e4-8516-a119424817b7","Type":"ContainerStarted","Data":"46dab533f0ad3a4c85a7b94041e3d6360ca71834afa248a30225039883bfe089"} Nov 25 20:42:50 crc kubenswrapper[4983]: I1125 20:42:50.657917 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s9cqc" event={"ID":"55267844-5a91-44d3-b4bc-6292f74eb7bb","Type":"ContainerStarted","Data":"420cfcf298db8455384ae8635ec1ea1e6d3dd8278de901ac8ad966eb9eb9faf3"} Nov 25 20:42:50 crc kubenswrapper[4983]: I1125 20:42:50.659830 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fz9jq" event={"ID":"4f4e8454-9ea6-414b-83a4-6c8a16cf983e","Type":"ContainerStarted","Data":"45bc78a7da72340435f26bf3f481c2ddb2ea1df2f26bf5d77fa040f113d460dd"} Nov 25 20:42:50 crc kubenswrapper[4983]: I1125 20:42:50.661238 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rln2d" event={"ID":"9681d7cb-ab2c-4458-bc07-a7d278f16fd2","Type":"ContainerStarted","Data":"ad1214c5a956290ecc9096e242ceb0149b1df92c32729b30dcc0052f9113a2ba"} Nov 25 20:42:50 crc kubenswrapper[4983]: I1125 20:42:50.662838 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-89aa-account-create-update-gpfwm" event={"ID":"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed","Type":"ContainerStarted","Data":"28e1253da13187ec01ee742a444816a50da8582c6d2bda3fc0e1b1fecc7abbc9"} Nov 25 20:42:50 crc kubenswrapper[4983]: I1125 20:42:50.681423 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-89aa-account-create-update-gpfwm" podStartSLOduration=5.681401872 podStartE2EDuration="5.681401872s" podCreationTimestamp="2025-11-25 20:42:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:42:50.676890473 +0000 UTC m=+951.789423865" watchObservedRunningTime="2025-11-25 20:42:50.681401872 +0000 UTC m=+951.793935274" Nov 25 20:42:50 crc kubenswrapper[4983]: I1125 20:42:50.854482 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:50 crc kubenswrapper[4983]: E1125 20:42:50.854716 4983 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 20:42:50 crc kubenswrapper[4983]: E1125 20:42:50.854755 4983 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 20:42:50 crc kubenswrapper[4983]: E1125 20:42:50.854827 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift podName:214288a7-ce6d-4844-b3f6-8ab78b7e1b54 nodeName:}" failed. No retries permitted until 2025-11-25 20:42:52.854807344 +0000 UTC m=+953.967340736 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift") pod "swift-storage-0" (UID: "214288a7-ce6d-4844-b3f6-8ab78b7e1b54") : configmap "swift-ring-files" not found Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.299599 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-c2ld6"] Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.301398 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c2ld6" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.324660 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c2ld6"] Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.376820 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03c578b0-726b-457d-a2b4-a3582ea1704c-operator-scripts\") pod \"glance-db-create-c2ld6\" (UID: \"03c578b0-726b-457d-a2b4-a3582ea1704c\") " pod="openstack/glance-db-create-c2ld6" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.376896 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqqn7\" (UniqueName: \"kubernetes.io/projected/03c578b0-726b-457d-a2b4-a3582ea1704c-kube-api-access-zqqn7\") pod \"glance-db-create-c2ld6\" (UID: \"03c578b0-726b-457d-a2b4-a3582ea1704c\") " pod="openstack/glance-db-create-c2ld6" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.470189 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8345-account-create-update-pvqgp"] Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.482959 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8345-account-create-update-pvqgp"] Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.478675 4983 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03c578b0-726b-457d-a2b4-a3582ea1704c-operator-scripts\") pod \"glance-db-create-c2ld6\" (UID: \"03c578b0-726b-457d-a2b4-a3582ea1704c\") " pod="openstack/glance-db-create-c2ld6" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.477718 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03c578b0-726b-457d-a2b4-a3582ea1704c-operator-scripts\") pod \"glance-db-create-c2ld6\" (UID: \"03c578b0-726b-457d-a2b4-a3582ea1704c\") " pod="openstack/glance-db-create-c2ld6" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.484402 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqqn7\" (UniqueName: \"kubernetes.io/projected/03c578b0-726b-457d-a2b4-a3582ea1704c-kube-api-access-zqqn7\") pod \"glance-db-create-c2ld6\" (UID: \"03c578b0-726b-457d-a2b4-a3582ea1704c\") " pod="openstack/glance-db-create-c2ld6" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.484069 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8345-account-create-update-pvqgp" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.487972 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.491662 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.517944 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.521836 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqqn7\" (UniqueName: \"kubernetes.io/projected/03c578b0-726b-457d-a2b4-a3582ea1704c-kube-api-access-zqqn7\") pod \"glance-db-create-c2ld6\" (UID: \"03c578b0-726b-457d-a2b4-a3582ea1704c\") " pod="openstack/glance-db-create-c2ld6" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.588382 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-config\") pod \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.588546 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-dns-svc\") pod \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.592060 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-ovsdbserver-nb\") pod \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.592165 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n57p6\" (UniqueName: \"kubernetes.io/projected/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-kube-api-access-n57p6\") pod \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\" (UID: \"e2f07d9e-1a48-4e50-9d3b-93aec12276f5\") " Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.592793 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb98ad7c-5111-4876-b5ca-4196c92b2cce-operator-scripts\") pod \"glance-8345-account-create-update-pvqgp\" (UID: \"fb98ad7c-5111-4876-b5ca-4196c92b2cce\") " pod="openstack/glance-8345-account-create-update-pvqgp" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.592937 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm45d\" (UniqueName: \"kubernetes.io/projected/fb98ad7c-5111-4876-b5ca-4196c92b2cce-kube-api-access-dm45d\") pod \"glance-8345-account-create-update-pvqgp\" (UID: \"fb98ad7c-5111-4876-b5ca-4196c92b2cce\") " pod="openstack/glance-8345-account-create-update-pvqgp" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.596065 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c2ld6" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.610725 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-kube-api-access-n57p6" (OuterVolumeSpecName: "kube-api-access-n57p6") pod "e2f07d9e-1a48-4e50-9d3b-93aec12276f5" (UID: "e2f07d9e-1a48-4e50-9d3b-93aec12276f5"). InnerVolumeSpecName "kube-api-access-n57p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.647139 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-config" (OuterVolumeSpecName: "config") pod "e2f07d9e-1a48-4e50-9d3b-93aec12276f5" (UID: "e2f07d9e-1a48-4e50-9d3b-93aec12276f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.657913 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2f07d9e-1a48-4e50-9d3b-93aec12276f5" (UID: "e2f07d9e-1a48-4e50-9d3b-93aec12276f5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.669706 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2f07d9e-1a48-4e50-9d3b-93aec12276f5" (UID: "e2f07d9e-1a48-4e50-9d3b-93aec12276f5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.688311 4983 generic.go:334] "Generic (PLEG): container finished" podID="2c3cf6a7-c209-47b7-81a7-95e076a0e4ed" containerID="28e1253da13187ec01ee742a444816a50da8582c6d2bda3fc0e1b1fecc7abbc9" exitCode=0 Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.691258 4983 generic.go:334] "Generic (PLEG): container finished" podID="9e1808b9-63ae-48e4-8516-a119424817b7" containerID="46dab533f0ad3a4c85a7b94041e3d6360ca71834afa248a30225039883bfe089" exitCode=0 Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.694744 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm45d\" (UniqueName: \"kubernetes.io/projected/fb98ad7c-5111-4876-b5ca-4196c92b2cce-kube-api-access-dm45d\") pod \"glance-8345-account-create-update-pvqgp\" (UID: \"fb98ad7c-5111-4876-b5ca-4196c92b2cce\") " pod="openstack/glance-8345-account-create-update-pvqgp" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.694946 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb98ad7c-5111-4876-b5ca-4196c92b2cce-operator-scripts\") pod \"glance-8345-account-create-update-pvqgp\" (UID: \"fb98ad7c-5111-4876-b5ca-4196c92b2cce\") " pod="openstack/glance-8345-account-create-update-pvqgp" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.694995 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.695007 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.695020 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n57p6\" (UniqueName: \"kubernetes.io/projected/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-kube-api-access-n57p6\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.695031 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2f07d9e-1a48-4e50-9d3b-93aec12276f5-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.695764 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb98ad7c-5111-4876-b5ca-4196c92b2cce-operator-scripts\") pod \"glance-8345-account-create-update-pvqgp\" (UID: \"fb98ad7c-5111-4876-b5ca-4196c92b2cce\") " pod="openstack/glance-8345-account-create-update-pvqgp" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.697455 4983 generic.go:334] "Generic (PLEG): container finished" podID="55267844-5a91-44d3-b4bc-6292f74eb7bb" containerID="420cfcf298db8455384ae8635ec1ea1e6d3dd8278de901ac8ad966eb9eb9faf3" exitCode=0 Nov 25 
20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.702241 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.725045 4983 kubelet_pods.go:2476] "Failed to reduce cpu time for pod pending volume cleanup" podUID="e2f07d9e-1a48-4e50-9d3b-93aec12276f5" err="openat2 /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2f07d9e_1a48_4e50_9d3b_93aec12276f5.slice/cgroup.controllers: no such file or directory" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.725129 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-89aa-account-create-update-gpfwm" event={"ID":"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed","Type":"ContainerDied","Data":"28e1253da13187ec01ee742a444816a50da8582c6d2bda3fc0e1b1fecc7abbc9"} Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.725164 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c89-account-create-update-5nbdz" event={"ID":"9e1808b9-63ae-48e4-8516-a119424817b7","Type":"ContainerDied","Data":"46dab533f0ad3a4c85a7b94041e3d6360ca71834afa248a30225039883bfe089"} Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.725180 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s9cqc" event={"ID":"55267844-5a91-44d3-b4bc-6292f74eb7bb","Type":"ContainerDied","Data":"420cfcf298db8455384ae8635ec1ea1e6d3dd8278de901ac8ad966eb9eb9faf3"} Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.725197 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" event={"ID":"e2f07d9e-1a48-4e50-9d3b-93aec12276f5","Type":"ContainerDied","Data":"bc33acbb244abe15c0f0aed544da6188a08ac211d72b83b48302caa9910b48e3"} Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.725222 4983 scope.go:117] "RemoveContainer" 
containerID="cc34d918b0a1c6505c9560edd62c01d25a8c59df4450f75c78126fbd9ebe7e52" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.731141 4983 generic.go:334] "Generic (PLEG): container finished" podID="c26851c2-2ee8-457b-926b-2ccf02fb308e" containerID="8d2c64750a3750b42b134ba232389fdb04d7126680a8ba586664c484570e9553" exitCode=0 Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.732179 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-25mgn" event={"ID":"c26851c2-2ee8-457b-926b-2ccf02fb308e","Type":"ContainerDied","Data":"8d2c64750a3750b42b134ba232389fdb04d7126680a8ba586664c484570e9553"} Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.741741 4983 generic.go:334] "Generic (PLEG): container finished" podID="4f4e8454-9ea6-414b-83a4-6c8a16cf983e" containerID="45bc78a7da72340435f26bf3f481c2ddb2ea1df2f26bf5d77fa040f113d460dd" exitCode=0 Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.741751 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fz9jq" event={"ID":"4f4e8454-9ea6-414b-83a4-6c8a16cf983e","Type":"ContainerDied","Data":"45bc78a7da72340435f26bf3f481c2ddb2ea1df2f26bf5d77fa040f113d460dd"} Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.747482 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm45d\" (UniqueName: \"kubernetes.io/projected/fb98ad7c-5111-4876-b5ca-4196c92b2cce-kube-api-access-dm45d\") pod \"glance-8345-account-create-update-pvqgp\" (UID: \"fb98ad7c-5111-4876-b5ca-4196c92b2cce\") " pod="openstack/glance-8345-account-create-update-pvqgp" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.820389 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cg64x"] Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.835634 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-cg64x"] Nov 25 20:42:51 crc kubenswrapper[4983]: 
I1125 20:42:51.836594 4983 scope.go:117] "RemoveContainer" containerID="2d373dab235b5f395ef334838b12f187d7340a76d760bc9421a4a7741a851141" Nov 25 20:42:51 crc kubenswrapper[4983]: I1125 20:42:51.907807 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8345-account-create-update-pvqgp" Nov 25 20:42:52 crc kubenswrapper[4983]: I1125 20:42:52.178222 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c2ld6"] Nov 25 20:42:52 crc kubenswrapper[4983]: W1125 20:42:52.196848 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03c578b0_726b_457d_a2b4_a3582ea1704c.slice/crio-969733a08ed135c3c45cf0feca2244f6ade5409a77b10c93f1abecd5d89b0968 WatchSource:0}: Error finding container 969733a08ed135c3c45cf0feca2244f6ade5409a77b10c93f1abecd5d89b0968: Status 404 returned error can't find the container with id 969733a08ed135c3c45cf0feca2244f6ade5409a77b10c93f1abecd5d89b0968 Nov 25 20:42:52 crc kubenswrapper[4983]: I1125 20:42:52.385633 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8345-account-create-update-pvqgp"] Nov 25 20:42:52 crc kubenswrapper[4983]: I1125 20:42:52.752999 4983 generic.go:334] "Generic (PLEG): container finished" podID="03c578b0-726b-457d-a2b4-a3582ea1704c" containerID="17be22fa7cf60d0d919532014e892f4b180e68e4e523db50add371c2f2f11562" exitCode=0 Nov 25 20:42:52 crc kubenswrapper[4983]: I1125 20:42:52.753042 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c2ld6" event={"ID":"03c578b0-726b-457d-a2b4-a3582ea1704c","Type":"ContainerDied","Data":"17be22fa7cf60d0d919532014e892f4b180e68e4e523db50add371c2f2f11562"} Nov 25 20:42:52 crc kubenswrapper[4983]: I1125 20:42:52.753587 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c2ld6" 
event={"ID":"03c578b0-726b-457d-a2b4-a3582ea1704c","Type":"ContainerStarted","Data":"969733a08ed135c3c45cf0feca2244f6ade5409a77b10c93f1abecd5d89b0968"} Nov 25 20:42:52 crc kubenswrapper[4983]: I1125 20:42:52.755512 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-25mgn" event={"ID":"c26851c2-2ee8-457b-926b-2ccf02fb308e","Type":"ContainerStarted","Data":"3e56b34fe5d51c5ce585d74b4f668b8b6fa7940df3cee881641fadfbd775ce18"} Nov 25 20:42:52 crc kubenswrapper[4983]: I1125 20:42:52.755919 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:52 crc kubenswrapper[4983]: I1125 20:42:52.797763 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-25mgn" podStartSLOduration=5.797741037 podStartE2EDuration="5.797741037s" podCreationTimestamp="2025-11-25 20:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:42:52.791147843 +0000 UTC m=+953.903681245" watchObservedRunningTime="2025-11-25 20:42:52.797741037 +0000 UTC m=+953.910274439" Nov 25 20:42:52 crc kubenswrapper[4983]: I1125 20:42:52.923961 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:42:52 crc kubenswrapper[4983]: E1125 20:42:52.924245 4983 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 20:42:52 crc kubenswrapper[4983]: E1125 20:42:52.924261 4983 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 20:42:52 crc kubenswrapper[4983]: E1125 
20:42:52.924320 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift podName:214288a7-ce6d-4844-b3f6-8ab78b7e1b54 nodeName:}" failed. No retries permitted until 2025-11-25 20:42:56.924299919 +0000 UTC m=+958.036833311 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift") pod "swift-storage-0" (UID: "214288a7-ce6d-4844-b3f6-8ab78b7e1b54") : configmap "swift-ring-files" not found Nov 25 20:42:53 crc kubenswrapper[4983]: W1125 20:42:53.409260 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb98ad7c_5111_4876_b5ca_4196c92b2cce.slice/crio-bcb414a7d86380947f0fa532c864bbbd7037658bafe1a244c2ee2c0060beae52 WatchSource:0}: Error finding container bcb414a7d86380947f0fa532c864bbbd7037658bafe1a244c2ee2c0060beae52: Status 404 returned error can't find the container with id bcb414a7d86380947f0fa532c864bbbd7037658bafe1a244c2ee2c0060beae52 Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.514601 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c89-account-create-update-5nbdz" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.522747 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s9cqc" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.539537 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-fz9jq" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.548155 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn6wn\" (UniqueName: \"kubernetes.io/projected/55267844-5a91-44d3-b4bc-6292f74eb7bb-kube-api-access-kn6wn\") pod \"55267844-5a91-44d3-b4bc-6292f74eb7bb\" (UID: \"55267844-5a91-44d3-b4bc-6292f74eb7bb\") " Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.548199 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4e8454-9ea6-414b-83a4-6c8a16cf983e-operator-scripts\") pod \"4f4e8454-9ea6-414b-83a4-6c8a16cf983e\" (UID: \"4f4e8454-9ea6-414b-83a4-6c8a16cf983e\") " Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.548265 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fhls\" (UniqueName: \"kubernetes.io/projected/9e1808b9-63ae-48e4-8516-a119424817b7-kube-api-access-9fhls\") pod \"9e1808b9-63ae-48e4-8516-a119424817b7\" (UID: \"9e1808b9-63ae-48e4-8516-a119424817b7\") " Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.548300 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blqgl\" (UniqueName: \"kubernetes.io/projected/4f4e8454-9ea6-414b-83a4-6c8a16cf983e-kube-api-access-blqgl\") pod \"4f4e8454-9ea6-414b-83a4-6c8a16cf983e\" (UID: \"4f4e8454-9ea6-414b-83a4-6c8a16cf983e\") " Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.548333 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e1808b9-63ae-48e4-8516-a119424817b7-operator-scripts\") pod \"9e1808b9-63ae-48e4-8516-a119424817b7\" (UID: \"9e1808b9-63ae-48e4-8516-a119424817b7\") " Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.548381 4983 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55267844-5a91-44d3-b4bc-6292f74eb7bb-operator-scripts\") pod \"55267844-5a91-44d3-b4bc-6292f74eb7bb\" (UID: \"55267844-5a91-44d3-b4bc-6292f74eb7bb\") " Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.549282 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55267844-5a91-44d3-b4bc-6292f74eb7bb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55267844-5a91-44d3-b4bc-6292f74eb7bb" (UID: "55267844-5a91-44d3-b4bc-6292f74eb7bb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.557081 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4e8454-9ea6-414b-83a4-6c8a16cf983e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f4e8454-9ea6-414b-83a4-6c8a16cf983e" (UID: "4f4e8454-9ea6-414b-83a4-6c8a16cf983e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.558131 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55267844-5a91-44d3-b4bc-6292f74eb7bb-kube-api-access-kn6wn" (OuterVolumeSpecName: "kube-api-access-kn6wn") pod "55267844-5a91-44d3-b4bc-6292f74eb7bb" (UID: "55267844-5a91-44d3-b4bc-6292f74eb7bb"). InnerVolumeSpecName "kube-api-access-kn6wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.558826 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4e8454-9ea6-414b-83a4-6c8a16cf983e-kube-api-access-blqgl" (OuterVolumeSpecName: "kube-api-access-blqgl") pod "4f4e8454-9ea6-414b-83a4-6c8a16cf983e" (UID: "4f4e8454-9ea6-414b-83a4-6c8a16cf983e"). 
InnerVolumeSpecName "kube-api-access-blqgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.559721 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e1808b9-63ae-48e4-8516-a119424817b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e1808b9-63ae-48e4-8516-a119424817b7" (UID: "9e1808b9-63ae-48e4-8516-a119424817b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.561016 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e1808b9-63ae-48e4-8516-a119424817b7-kube-api-access-9fhls" (OuterVolumeSpecName: "kube-api-access-9fhls") pod "9e1808b9-63ae-48e4-8516-a119424817b7" (UID: "9e1808b9-63ae-48e4-8516-a119424817b7"). InnerVolumeSpecName "kube-api-access-9fhls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.561463 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-89aa-account-create-update-gpfwm" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.616279 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f07d9e-1a48-4e50-9d3b-93aec12276f5" path="/var/lib/kubelet/pods/e2f07d9e-1a48-4e50-9d3b-93aec12276f5/volumes" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.649193 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86bkk\" (UniqueName: \"kubernetes.io/projected/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed-kube-api-access-86bkk\") pod \"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed\" (UID: \"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed\") " Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.649396 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed-operator-scripts\") pod \"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed\" (UID: \"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed\") " Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.649750 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fhls\" (UniqueName: \"kubernetes.io/projected/9e1808b9-63ae-48e4-8516-a119424817b7-kube-api-access-9fhls\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.649819 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blqgl\" (UniqueName: \"kubernetes.io/projected/4f4e8454-9ea6-414b-83a4-6c8a16cf983e-kube-api-access-blqgl\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.649887 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e1808b9-63ae-48e4-8516-a119424817b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.649952 4983 reconciler_common.go:293] "Volume 
detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55267844-5a91-44d3-b4bc-6292f74eb7bb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.650006 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn6wn\" (UniqueName: \"kubernetes.io/projected/55267844-5a91-44d3-b4bc-6292f74eb7bb-kube-api-access-kn6wn\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.650067 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4e8454-9ea6-414b-83a4-6c8a16cf983e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.649912 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c3cf6a7-c209-47b7-81a7-95e076a0e4ed" (UID: "2c3cf6a7-c209-47b7-81a7-95e076a0e4ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.653136 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed-kube-api-access-86bkk" (OuterVolumeSpecName: "kube-api-access-86bkk") pod "2c3cf6a7-c209-47b7-81a7-95e076a0e4ed" (UID: "2c3cf6a7-c209-47b7-81a7-95e076a0e4ed"). InnerVolumeSpecName "kube-api-access-86bkk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.751873 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86bkk\" (UniqueName: \"kubernetes.io/projected/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed-kube-api-access-86bkk\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.751927 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.765627 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s9cqc" event={"ID":"55267844-5a91-44d3-b4bc-6292f74eb7bb","Type":"ContainerDied","Data":"b57d06c7047d5d679fc75933070a895678ce2d36f143fb394267f798ade333d5"} Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.765662 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-s9cqc" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.765688 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b57d06c7047d5d679fc75933070a895678ce2d36f143fb394267f798ade333d5" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.769093 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8345-account-create-update-pvqgp" event={"ID":"fb98ad7c-5111-4876-b5ca-4196c92b2cce","Type":"ContainerStarted","Data":"bcb414a7d86380947f0fa532c864bbbd7037658bafe1a244c2ee2c0060beae52"} Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.771234 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-fz9jq" event={"ID":"4f4e8454-9ea6-414b-83a4-6c8a16cf983e","Type":"ContainerDied","Data":"edcdc21970b800bf8b3c2d78977936f963a90967d88cc093b8d1fd9e4f7b6d3c"} Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.771372 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edcdc21970b800bf8b3c2d78977936f963a90967d88cc093b8d1fd9e4f7b6d3c" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.771312 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-fz9jq" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.772466 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-89aa-account-create-update-gpfwm" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.772481 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-89aa-account-create-update-gpfwm" event={"ID":"2c3cf6a7-c209-47b7-81a7-95e076a0e4ed","Type":"ContainerDied","Data":"a8eeec860effe48acb9bbc0f56b37297416eb006ebe721556e73160b0f1a5e4e"} Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.772518 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8eeec860effe48acb9bbc0f56b37297416eb006ebe721556e73160b0f1a5e4e" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.775680 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c89-account-create-update-5nbdz" event={"ID":"9e1808b9-63ae-48e4-8516-a119424817b7","Type":"ContainerDied","Data":"ad9f808fd596463af8d3b1bad0fd0d5f20f75bf2580a4786547d2eb60f4d05ad"} Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.775726 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad9f808fd596463af8d3b1bad0fd0d5f20f75bf2580a4786547d2eb60f4d05ad" Nov 25 20:42:53 crc kubenswrapper[4983]: I1125 20:42:53.775757 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7c89-account-create-update-5nbdz" Nov 25 20:42:54 crc kubenswrapper[4983]: I1125 20:42:54.789982 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c2ld6" event={"ID":"03c578b0-726b-457d-a2b4-a3582ea1704c","Type":"ContainerDied","Data":"969733a08ed135c3c45cf0feca2244f6ade5409a77b10c93f1abecd5d89b0968"} Nov 25 20:42:54 crc kubenswrapper[4983]: I1125 20:42:54.790420 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="969733a08ed135c3c45cf0feca2244f6ade5409a77b10c93f1abecd5d89b0968" Nov 25 20:42:54 crc kubenswrapper[4983]: I1125 20:42:54.938509 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c2ld6" Nov 25 20:42:55 crc kubenswrapper[4983]: I1125 20:42:55.080493 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03c578b0-726b-457d-a2b4-a3582ea1704c-operator-scripts\") pod \"03c578b0-726b-457d-a2b4-a3582ea1704c\" (UID: \"03c578b0-726b-457d-a2b4-a3582ea1704c\") " Nov 25 20:42:55 crc kubenswrapper[4983]: I1125 20:42:55.081013 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqqn7\" (UniqueName: \"kubernetes.io/projected/03c578b0-726b-457d-a2b4-a3582ea1704c-kube-api-access-zqqn7\") pod \"03c578b0-726b-457d-a2b4-a3582ea1704c\" (UID: \"03c578b0-726b-457d-a2b4-a3582ea1704c\") " Nov 25 20:42:55 crc kubenswrapper[4983]: I1125 20:42:55.081080 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03c578b0-726b-457d-a2b4-a3582ea1704c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "03c578b0-726b-457d-a2b4-a3582ea1704c" (UID: "03c578b0-726b-457d-a2b4-a3582ea1704c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:55 crc kubenswrapper[4983]: I1125 20:42:55.081581 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/03c578b0-726b-457d-a2b4-a3582ea1704c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:55 crc kubenswrapper[4983]: I1125 20:42:55.087962 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c578b0-726b-457d-a2b4-a3582ea1704c-kube-api-access-zqqn7" (OuterVolumeSpecName: "kube-api-access-zqqn7") pod "03c578b0-726b-457d-a2b4-a3582ea1704c" (UID: "03c578b0-726b-457d-a2b4-a3582ea1704c"). InnerVolumeSpecName "kube-api-access-zqqn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:42:55 crc kubenswrapper[4983]: I1125 20:42:55.182865 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqqn7\" (UniqueName: \"kubernetes.io/projected/03c578b0-726b-457d-a2b4-a3582ea1704c-kube-api-access-zqqn7\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:55 crc kubenswrapper[4983]: I1125 20:42:55.810435 4983 generic.go:334] "Generic (PLEG): container finished" podID="fb98ad7c-5111-4876-b5ca-4196c92b2cce" containerID="03b1d1ffbb3fd44693d9596d39664bb323bb51a7a7e732c042877278f2b699aa" exitCode=0 Nov 25 20:42:55 crc kubenswrapper[4983]: I1125 20:42:55.810521 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8345-account-create-update-pvqgp" event={"ID":"fb98ad7c-5111-4876-b5ca-4196c92b2cce","Type":"ContainerDied","Data":"03b1d1ffbb3fd44693d9596d39664bb323bb51a7a7e732c042877278f2b699aa"} Nov 25 20:42:55 crc kubenswrapper[4983]: I1125 20:42:55.812516 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-c2ld6" Nov 25 20:42:55 crc kubenswrapper[4983]: I1125 20:42:55.813379 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rln2d" event={"ID":"9681d7cb-ab2c-4458-bc07-a7d278f16fd2","Type":"ContainerStarted","Data":"fdc542dce2310a623aa1a025e4eaacd16aa60a66f9d2d09c752c824503cb355e"} Nov 25 20:42:55 crc kubenswrapper[4983]: I1125 20:42:55.981253 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 25 20:42:55 crc kubenswrapper[4983]: I1125 20:42:55.983881 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-rln2d" podStartSLOduration=2.716761876 podStartE2EDuration="6.983866049s" podCreationTimestamp="2025-11-25 20:42:49 +0000 UTC" firstStartedPulling="2025-11-25 20:42:50.483724258 +0000 UTC m=+951.596257650" lastFinishedPulling="2025-11-25 20:42:54.750828421 +0000 UTC m=+955.863361823" observedRunningTime="2025-11-25 20:42:55.869246884 +0000 UTC m=+956.981780326" watchObservedRunningTime="2025-11-25 20:42:55.983866049 +0000 UTC m=+957.096399441" Nov 25 20:42:56 crc kubenswrapper[4983]: I1125 20:42:56.189450 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-cg64x" podUID="e2f07d9e-1a48-4e50-9d3b-93aec12276f5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Nov 25 20:42:56 crc kubenswrapper[4983]: I1125 20:42:56.559117 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 25 20:42:57 crc kubenswrapper[4983]: I1125 20:42:57.019023 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " 
pod="openstack/swift-storage-0" Nov 25 20:42:57 crc kubenswrapper[4983]: E1125 20:42:57.019337 4983 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 20:42:57 crc kubenswrapper[4983]: E1125 20:42:57.019386 4983 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 20:42:57 crc kubenswrapper[4983]: E1125 20:42:57.019481 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift podName:214288a7-ce6d-4844-b3f6-8ab78b7e1b54 nodeName:}" failed. No retries permitted until 2025-11-25 20:43:05.019455009 +0000 UTC m=+966.131988411 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift") pod "swift-storage-0" (UID: "214288a7-ce6d-4844-b3f6-8ab78b7e1b54") : configmap "swift-ring-files" not found Nov 25 20:42:57 crc kubenswrapper[4983]: I1125 20:42:57.195279 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8345-account-create-update-pvqgp" Nov 25 20:42:57 crc kubenswrapper[4983]: I1125 20:42:57.324989 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm45d\" (UniqueName: \"kubernetes.io/projected/fb98ad7c-5111-4876-b5ca-4196c92b2cce-kube-api-access-dm45d\") pod \"fb98ad7c-5111-4876-b5ca-4196c92b2cce\" (UID: \"fb98ad7c-5111-4876-b5ca-4196c92b2cce\") " Nov 25 20:42:57 crc kubenswrapper[4983]: I1125 20:42:57.325037 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb98ad7c-5111-4876-b5ca-4196c92b2cce-operator-scripts\") pod \"fb98ad7c-5111-4876-b5ca-4196c92b2cce\" (UID: \"fb98ad7c-5111-4876-b5ca-4196c92b2cce\") " Nov 25 20:42:57 crc kubenswrapper[4983]: I1125 20:42:57.325905 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb98ad7c-5111-4876-b5ca-4196c92b2cce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb98ad7c-5111-4876-b5ca-4196c92b2cce" (UID: "fb98ad7c-5111-4876-b5ca-4196c92b2cce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:57 crc kubenswrapper[4983]: I1125 20:42:57.331424 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb98ad7c-5111-4876-b5ca-4196c92b2cce-kube-api-access-dm45d" (OuterVolumeSpecName: "kube-api-access-dm45d") pod "fb98ad7c-5111-4876-b5ca-4196c92b2cce" (UID: "fb98ad7c-5111-4876-b5ca-4196c92b2cce"). InnerVolumeSpecName "kube-api-access-dm45d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:42:57 crc kubenswrapper[4983]: I1125 20:42:57.427352 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm45d\" (UniqueName: \"kubernetes.io/projected/fb98ad7c-5111-4876-b5ca-4196c92b2cce-kube-api-access-dm45d\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:57 crc kubenswrapper[4983]: I1125 20:42:57.427387 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb98ad7c-5111-4876-b5ca-4196c92b2cce-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:57 crc kubenswrapper[4983]: I1125 20:42:57.832052 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8345-account-create-update-pvqgp" event={"ID":"fb98ad7c-5111-4876-b5ca-4196c92b2cce","Type":"ContainerDied","Data":"bcb414a7d86380947f0fa532c864bbbd7037658bafe1a244c2ee2c0060beae52"} Nov 25 20:42:57 crc kubenswrapper[4983]: I1125 20:42:57.832101 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcb414a7d86380947f0fa532c864bbbd7037658bafe1a244c2ee2c0060beae52" Nov 25 20:42:57 crc kubenswrapper[4983]: I1125 20:42:57.832164 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8345-account-create-update-pvqgp" Nov 25 20:42:58 crc kubenswrapper[4983]: I1125 20:42:58.319726 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:42:58 crc kubenswrapper[4983]: I1125 20:42:58.432411 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7wkqm"] Nov 25 20:42:58 crc kubenswrapper[4983]: I1125 20:42:58.432760 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" podUID="8c56e13a-8335-4d41-9bb3-5894d35814dd" containerName="dnsmasq-dns" containerID="cri-o://9f301d23f12460edabe2161b03afd5f1c0115d0d5d5f69dd4928cfc1265f6504" gracePeriod=10 Nov 25 20:42:58 crc kubenswrapper[4983]: I1125 20:42:58.843777 4983 generic.go:334] "Generic (PLEG): container finished" podID="8c56e13a-8335-4d41-9bb3-5894d35814dd" containerID="9f301d23f12460edabe2161b03afd5f1c0115d0d5d5f69dd4928cfc1265f6504" exitCode=0 Nov 25 20:42:58 crc kubenswrapper[4983]: I1125 20:42:58.843823 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" event={"ID":"8c56e13a-8335-4d41-9bb3-5894d35814dd","Type":"ContainerDied","Data":"9f301d23f12460edabe2161b03afd5f1c0115d0d5d5f69dd4928cfc1265f6504"} Nov 25 20:42:58 crc kubenswrapper[4983]: I1125 20:42:58.972612 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.065688 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-ovsdbserver-sb\") pod \"8c56e13a-8335-4d41-9bb3-5894d35814dd\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.065733 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-ovsdbserver-nb\") pod \"8c56e13a-8335-4d41-9bb3-5894d35814dd\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.065830 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-config\") pod \"8c56e13a-8335-4d41-9bb3-5894d35814dd\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.065864 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-dns-svc\") pod \"8c56e13a-8335-4d41-9bb3-5894d35814dd\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.065890 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv4s4\" (UniqueName: \"kubernetes.io/projected/8c56e13a-8335-4d41-9bb3-5894d35814dd-kube-api-access-pv4s4\") pod \"8c56e13a-8335-4d41-9bb3-5894d35814dd\" (UID: \"8c56e13a-8335-4d41-9bb3-5894d35814dd\") " Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.087817 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8c56e13a-8335-4d41-9bb3-5894d35814dd-kube-api-access-pv4s4" (OuterVolumeSpecName: "kube-api-access-pv4s4") pod "8c56e13a-8335-4d41-9bb3-5894d35814dd" (UID: "8c56e13a-8335-4d41-9bb3-5894d35814dd"). InnerVolumeSpecName "kube-api-access-pv4s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.108632 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c56e13a-8335-4d41-9bb3-5894d35814dd" (UID: "8c56e13a-8335-4d41-9bb3-5894d35814dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.110997 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c56e13a-8335-4d41-9bb3-5894d35814dd" (UID: "8c56e13a-8335-4d41-9bb3-5894d35814dd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.120321 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c56e13a-8335-4d41-9bb3-5894d35814dd" (UID: "8c56e13a-8335-4d41-9bb3-5894d35814dd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.120713 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-config" (OuterVolumeSpecName: "config") pod "8c56e13a-8335-4d41-9bb3-5894d35814dd" (UID: "8c56e13a-8335-4d41-9bb3-5894d35814dd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.167623 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.167660 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv4s4\" (UniqueName: \"kubernetes.io/projected/8c56e13a-8335-4d41-9bb3-5894d35814dd-kube-api-access-pv4s4\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.167675 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.167688 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.167704 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c56e13a-8335-4d41-9bb3-5894d35814dd-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.851718 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" event={"ID":"8c56e13a-8335-4d41-9bb3-5894d35814dd","Type":"ContainerDied","Data":"6ede1fa9c4f2b1201f275633a62bc90096b0c274cab72bff923a5db8ace220c2"} Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.852069 4983 scope.go:117] "RemoveContainer" containerID="9f301d23f12460edabe2161b03afd5f1c0115d0d5d5f69dd4928cfc1265f6504" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.852223 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7wkqm" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.885994 4983 scope.go:117] "RemoveContainer" containerID="91cf899d5e69fe256f97e8cee651336a5d8600a69a66b58fe1c970be0aa7ae28" Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.887226 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7wkqm"] Nov 25 20:42:59 crc kubenswrapper[4983]: I1125 20:42:59.897112 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7wkqm"] Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.560136 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-qcbkt"] Nov 25 20:43:01 crc kubenswrapper[4983]: E1125 20:43:01.561471 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c56e13a-8335-4d41-9bb3-5894d35814dd" containerName="dnsmasq-dns" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561490 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c56e13a-8335-4d41-9bb3-5894d35814dd" containerName="dnsmasq-dns" Nov 25 20:43:01 crc kubenswrapper[4983]: E1125 20:43:01.561502 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55267844-5a91-44d3-b4bc-6292f74eb7bb" containerName="mariadb-database-create" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561509 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="55267844-5a91-44d3-b4bc-6292f74eb7bb" containerName="mariadb-database-create" Nov 25 20:43:01 crc kubenswrapper[4983]: E1125 20:43:01.561523 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c56e13a-8335-4d41-9bb3-5894d35814dd" containerName="init" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561528 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c56e13a-8335-4d41-9bb3-5894d35814dd" containerName="init" Nov 25 20:43:01 crc kubenswrapper[4983]: E1125 20:43:01.561541 4983 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="03c578b0-726b-457d-a2b4-a3582ea1704c" containerName="mariadb-database-create" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561546 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c578b0-726b-457d-a2b4-a3582ea1704c" containerName="mariadb-database-create" Nov 25 20:43:01 crc kubenswrapper[4983]: E1125 20:43:01.561576 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb98ad7c-5111-4876-b5ca-4196c92b2cce" containerName="mariadb-account-create-update" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561581 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb98ad7c-5111-4876-b5ca-4196c92b2cce" containerName="mariadb-account-create-update" Nov 25 20:43:01 crc kubenswrapper[4983]: E1125 20:43:01.561594 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4e8454-9ea6-414b-83a4-6c8a16cf983e" containerName="mariadb-database-create" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561600 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4e8454-9ea6-414b-83a4-6c8a16cf983e" containerName="mariadb-database-create" Nov 25 20:43:01 crc kubenswrapper[4983]: E1125 20:43:01.561613 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f07d9e-1a48-4e50-9d3b-93aec12276f5" containerName="init" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561619 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f07d9e-1a48-4e50-9d3b-93aec12276f5" containerName="init" Nov 25 20:43:01 crc kubenswrapper[4983]: E1125 20:43:01.561631 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e1808b9-63ae-48e4-8516-a119424817b7" containerName="mariadb-account-create-update" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561637 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e1808b9-63ae-48e4-8516-a119424817b7" containerName="mariadb-account-create-update" Nov 25 20:43:01 crc 
kubenswrapper[4983]: E1125 20:43:01.561646 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3cf6a7-c209-47b7-81a7-95e076a0e4ed" containerName="mariadb-account-create-update" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561652 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3cf6a7-c209-47b7-81a7-95e076a0e4ed" containerName="mariadb-account-create-update" Nov 25 20:43:01 crc kubenswrapper[4983]: E1125 20:43:01.561687 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f07d9e-1a48-4e50-9d3b-93aec12276f5" containerName="dnsmasq-dns" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561694 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f07d9e-1a48-4e50-9d3b-93aec12276f5" containerName="dnsmasq-dns" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561868 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c578b0-726b-457d-a2b4-a3582ea1704c" containerName="mariadb-database-create" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561884 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c3cf6a7-c209-47b7-81a7-95e076a0e4ed" containerName="mariadb-account-create-update" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561893 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb98ad7c-5111-4876-b5ca-4196c92b2cce" containerName="mariadb-account-create-update" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561901 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f07d9e-1a48-4e50-9d3b-93aec12276f5" containerName="dnsmasq-dns" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561945 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c56e13a-8335-4d41-9bb3-5894d35814dd" containerName="dnsmasq-dns" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561953 4983 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="55267844-5a91-44d3-b4bc-6292f74eb7bb" containerName="mariadb-database-create" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561969 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e1808b9-63ae-48e4-8516-a119424817b7" containerName="mariadb-account-create-update" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.561979 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4e8454-9ea6-414b-83a4-6c8a16cf983e" containerName="mariadb-database-create" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.562579 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.566605 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.567305 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dkrf2" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.579777 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qcbkt"] Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.634139 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c56e13a-8335-4d41-9bb3-5894d35814dd" path="/var/lib/kubelet/pods/8c56e13a-8335-4d41-9bb3-5894d35814dd/volumes" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.728715 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-db-sync-config-data\") pod \"glance-db-sync-qcbkt\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.728926 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-config-data\") pod \"glance-db-sync-qcbkt\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.729062 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-combined-ca-bundle\") pod \"glance-db-sync-qcbkt\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.729148 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7txk2\" (UniqueName: \"kubernetes.io/projected/a1deb958-bfd0-4b82-bbf7-823375a53e6b-kube-api-access-7txk2\") pod \"glance-db-sync-qcbkt\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.830748 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-combined-ca-bundle\") pod \"glance-db-sync-qcbkt\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.830843 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7txk2\" (UniqueName: \"kubernetes.io/projected/a1deb958-bfd0-4b82-bbf7-823375a53e6b-kube-api-access-7txk2\") pod \"glance-db-sync-qcbkt\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.830873 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-db-sync-config-data\") pod \"glance-db-sync-qcbkt\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.830922 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-config-data\") pod \"glance-db-sync-qcbkt\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.839054 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-db-sync-config-data\") pod \"glance-db-sync-qcbkt\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.840129 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-config-data\") pod \"glance-db-sync-qcbkt\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.840179 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-combined-ca-bundle\") pod \"glance-db-sync-qcbkt\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.855088 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7txk2\" (UniqueName: \"kubernetes.io/projected/a1deb958-bfd0-4b82-bbf7-823375a53e6b-kube-api-access-7txk2\") pod \"glance-db-sync-qcbkt\" (UID: 
\"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.870116 4983 generic.go:334] "Generic (PLEG): container finished" podID="9681d7cb-ab2c-4458-bc07-a7d278f16fd2" containerID="fdc542dce2310a623aa1a025e4eaacd16aa60a66f9d2d09c752c824503cb355e" exitCode=0 Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.870203 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rln2d" event={"ID":"9681d7cb-ab2c-4458-bc07-a7d278f16fd2","Type":"ContainerDied","Data":"fdc542dce2310a623aa1a025e4eaacd16aa60a66f9d2d09c752c824503cb355e"} Nov 25 20:43:01 crc kubenswrapper[4983]: I1125 20:43:01.916537 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:02 crc kubenswrapper[4983]: I1125 20:43:02.504280 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qcbkt"] Nov 25 20:43:02 crc kubenswrapper[4983]: I1125 20:43:02.881745 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qcbkt" event={"ID":"a1deb958-bfd0-4b82-bbf7-823375a53e6b","Type":"ContainerStarted","Data":"b3b05d2a804d221299358320c9fd37f12ed4e29956e37089213aa301097612d6"} Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.270355 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.458699 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-etc-swift\") pod \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.458752 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-swiftconf\") pod \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.458812 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-combined-ca-bundle\") pod \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.458929 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-scripts\") pod \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.458952 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-dispersionconf\") pod \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.459051 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r82q\" (UniqueName: 
\"kubernetes.io/projected/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-kube-api-access-8r82q\") pod \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.459087 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-ring-data-devices\") pod \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\" (UID: \"9681d7cb-ab2c-4458-bc07-a7d278f16fd2\") " Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.460034 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9681d7cb-ab2c-4458-bc07-a7d278f16fd2" (UID: "9681d7cb-ab2c-4458-bc07-a7d278f16fd2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.460090 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9681d7cb-ab2c-4458-bc07-a7d278f16fd2" (UID: "9681d7cb-ab2c-4458-bc07-a7d278f16fd2"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.465652 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-kube-api-access-8r82q" (OuterVolumeSpecName: "kube-api-access-8r82q") pod "9681d7cb-ab2c-4458-bc07-a7d278f16fd2" (UID: "9681d7cb-ab2c-4458-bc07-a7d278f16fd2"). InnerVolumeSpecName "kube-api-access-8r82q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.484974 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9681d7cb-ab2c-4458-bc07-a7d278f16fd2" (UID: "9681d7cb-ab2c-4458-bc07-a7d278f16fd2"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.486904 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-scripts" (OuterVolumeSpecName: "scripts") pod "9681d7cb-ab2c-4458-bc07-a7d278f16fd2" (UID: "9681d7cb-ab2c-4458-bc07-a7d278f16fd2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.487054 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9681d7cb-ab2c-4458-bc07-a7d278f16fd2" (UID: "9681d7cb-ab2c-4458-bc07-a7d278f16fd2"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.489792 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9681d7cb-ab2c-4458-bc07-a7d278f16fd2" (UID: "9681d7cb-ab2c-4458-bc07-a7d278f16fd2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.561200 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.561243 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.561255 4983 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.561266 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r82q\" (UniqueName: \"kubernetes.io/projected/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-kube-api-access-8r82q\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.561281 4983 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.561293 4983 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.561304 4983 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9681d7cb-ab2c-4458-bc07-a7d278f16fd2-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.896603 4983 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-rln2d" Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.898001 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-rln2d" event={"ID":"9681d7cb-ab2c-4458-bc07-a7d278f16fd2","Type":"ContainerDied","Data":"ad1214c5a956290ecc9096e242ceb0149b1df92c32729b30dcc0052f9113a2ba"} Nov 25 20:43:03 crc kubenswrapper[4983]: I1125 20:43:03.898055 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad1214c5a956290ecc9096e242ceb0149b1df92c32729b30dcc0052f9113a2ba" Nov 25 20:43:05 crc kubenswrapper[4983]: I1125 20:43:05.090824 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:43:05 crc kubenswrapper[4983]: I1125 20:43:05.123800 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/214288a7-ce6d-4844-b3f6-8ab78b7e1b54-etc-swift\") pod \"swift-storage-0\" (UID: \"214288a7-ce6d-4844-b3f6-8ab78b7e1b54\") " pod="openstack/swift-storage-0" Nov 25 20:43:05 crc kubenswrapper[4983]: I1125 20:43:05.306788 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 25 20:43:05 crc kubenswrapper[4983]: I1125 20:43:05.886316 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 25 20:43:05 crc kubenswrapper[4983]: W1125 20:43:05.889441 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod214288a7_ce6d_4844_b3f6_8ab78b7e1b54.slice/crio-6b9801c7d23d62b3eb3ec00a283caffbae2849a7951e7044de32663fa973a8ba WatchSource:0}: Error finding container 6b9801c7d23d62b3eb3ec00a283caffbae2849a7951e7044de32663fa973a8ba: Status 404 returned error can't find the container with id 6b9801c7d23d62b3eb3ec00a283caffbae2849a7951e7044de32663fa973a8ba Nov 25 20:43:05 crc kubenswrapper[4983]: I1125 20:43:05.915508 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"6b9801c7d23d62b3eb3ec00a283caffbae2849a7951e7044de32663fa973a8ba"} Nov 25 20:43:06 crc kubenswrapper[4983]: I1125 20:43:06.757414 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fgn5f" podUID="cd8b1052-9050-4771-8be4-3138d9c54d62" containerName="ovn-controller" probeResult="failure" output=< Nov 25 20:43:06 crc kubenswrapper[4983]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 20:43:06 crc kubenswrapper[4983]: > Nov 25 20:43:06 crc kubenswrapper[4983]: I1125 20:43:06.815567 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:43:06 crc kubenswrapper[4983]: I1125 20:43:06.823866 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-47bp7" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.031973 4983 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-fgn5f-config-f6wpv"] Nov 25 20:43:07 crc kubenswrapper[4983]: E1125 20:43:07.032629 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9681d7cb-ab2c-4458-bc07-a7d278f16fd2" containerName="swift-ring-rebalance" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.032657 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="9681d7cb-ab2c-4458-bc07-a7d278f16fd2" containerName="swift-ring-rebalance" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.032976 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="9681d7cb-ab2c-4458-bc07-a7d278f16fd2" containerName="swift-ring-rebalance" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.034099 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.040329 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.049410 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fgn5f-config-f6wpv"] Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.235715 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-run-ovn\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.235770 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqrgw\" (UniqueName: \"kubernetes.io/projected/c180597f-178d-412c-a661-0b63550e48b8-kube-api-access-bqrgw\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " 
pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.235800 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c180597f-178d-412c-a661-0b63550e48b8-scripts\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.235872 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-log-ovn\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.236046 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c180597f-178d-412c-a661-0b63550e48b8-additional-scripts\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.236067 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-run\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.337848 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c180597f-178d-412c-a661-0b63550e48b8-additional-scripts\") pod 
\"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.337904 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-run\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.337955 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-run-ovn\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.337981 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqrgw\" (UniqueName: \"kubernetes.io/projected/c180597f-178d-412c-a661-0b63550e48b8-kube-api-access-bqrgw\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.338007 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c180597f-178d-412c-a661-0b63550e48b8-scripts\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.338034 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-log-ovn\") pod 
\"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.338481 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-log-ovn\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.338569 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-run\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.338612 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-run-ovn\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.338796 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c180597f-178d-412c-a661-0b63550e48b8-additional-scripts\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.341670 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c180597f-178d-412c-a661-0b63550e48b8-scripts\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: 
\"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.362207 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqrgw\" (UniqueName: \"kubernetes.io/projected/c180597f-178d-412c-a661-0b63550e48b8-kube-api-access-bqrgw\") pod \"ovn-controller-fgn5f-config-f6wpv\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.415756 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.941491 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"0792de9f333a05b9a43755a65c1f4a44c6fda6cd297505485fe9f656d8d9a90c"} Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.941594 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"c287f0fe399dc3d538f960de24923f069680c4e09e7aa5e9fcde6db62d9156a2"} Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.951333 4983 generic.go:334] "Generic (PLEG): container finished" podID="a7aa78f0-48cd-4845-8a44-52fb63183dff" containerID="ef9a8cd4d098d443b9edc0388e0cba204f522814ecbf14dc542f8da23f5eb227" exitCode=0 Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.951425 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a7aa78f0-48cd-4845-8a44-52fb63183dff","Type":"ContainerDied","Data":"ef9a8cd4d098d443b9edc0388e0cba204f522814ecbf14dc542f8da23f5eb227"} Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.957829 4983 generic.go:334] "Generic (PLEG): container finished" 
podID="1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" containerID="82321d33208ab6fde63d4a7f69525aa9078ba24736057f6bf87559c7b2c6d966" exitCode=0 Nov 25 20:43:07 crc kubenswrapper[4983]: I1125 20:43:07.957909 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90","Type":"ContainerDied","Data":"82321d33208ab6fde63d4a7f69525aa9078ba24736057f6bf87559c7b2c6d966"} Nov 25 20:43:11 crc kubenswrapper[4983]: I1125 20:43:11.745275 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fgn5f" podUID="cd8b1052-9050-4771-8be4-3138d9c54d62" containerName="ovn-controller" probeResult="failure" output=< Nov 25 20:43:11 crc kubenswrapper[4983]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 20:43:11 crc kubenswrapper[4983]: > Nov 25 20:43:14 crc kubenswrapper[4983]: I1125 20:43:14.030224 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"9fe2244ee66c6c9b06098a915c630752b87064ae9ab04c553547ae1ae1647482"} Nov 25 20:43:14 crc kubenswrapper[4983]: I1125 20:43:14.032714 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a7aa78f0-48cd-4845-8a44-52fb63183dff","Type":"ContainerStarted","Data":"845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74"} Nov 25 20:43:14 crc kubenswrapper[4983]: I1125 20:43:14.032964 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:43:14 crc kubenswrapper[4983]: I1125 20:43:14.036252 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90","Type":"ContainerStarted","Data":"b99c1995b8ed69a77f165df12c8b5ed28d5ffae10924757fba6e34bd06d6bfe1"} Nov 25 20:43:14 crc 
kubenswrapper[4983]: I1125 20:43:14.036457 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 20:43:14 crc kubenswrapper[4983]: I1125 20:43:14.039566 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fgn5f-config-f6wpv"] Nov 25 20:43:14 crc kubenswrapper[4983]: I1125 20:43:14.069159 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=55.153957259 podStartE2EDuration="1m3.069134895s" podCreationTimestamp="2025-11-25 20:42:11 +0000 UTC" firstStartedPulling="2025-11-25 20:42:25.358732317 +0000 UTC m=+926.471265729" lastFinishedPulling="2025-11-25 20:42:33.273909973 +0000 UTC m=+934.386443365" observedRunningTime="2025-11-25 20:43:14.062506019 +0000 UTC m=+975.175039411" watchObservedRunningTime="2025-11-25 20:43:14.069134895 +0000 UTC m=+975.181668287" Nov 25 20:43:14 crc kubenswrapper[4983]: I1125 20:43:14.098970 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=54.554188629 podStartE2EDuration="1m3.098946354s" podCreationTimestamp="2025-11-25 20:42:11 +0000 UTC" firstStartedPulling="2025-11-25 20:42:24.513719463 +0000 UTC m=+925.626252865" lastFinishedPulling="2025-11-25 20:42:33.058477198 +0000 UTC m=+934.171010590" observedRunningTime="2025-11-25 20:43:14.096270793 +0000 UTC m=+975.208804195" watchObservedRunningTime="2025-11-25 20:43:14.098946354 +0000 UTC m=+975.211479746" Nov 25 20:43:15 crc kubenswrapper[4983]: I1125 20:43:15.052195 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qcbkt" event={"ID":"a1deb958-bfd0-4b82-bbf7-823375a53e6b","Type":"ContainerStarted","Data":"81dd2f779563d79c57b08ee10b6e08f743ae76217620cd159fa2338c43c05235"} Nov 25 20:43:15 crc kubenswrapper[4983]: I1125 20:43:15.055507 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"6bd25cf9de4c6555ad724292f21457e3550f9f6a44ffc961031c734b1ecbc681"} Nov 25 20:43:15 crc kubenswrapper[4983]: I1125 20:43:15.062413 4983 generic.go:334] "Generic (PLEG): container finished" podID="c180597f-178d-412c-a661-0b63550e48b8" containerID="8cb540a8d12319e78e1bc0401fd3e943854bed3121d8bff8563ba5766b6eadcf" exitCode=0 Nov 25 20:43:15 crc kubenswrapper[4983]: I1125 20:43:15.062506 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fgn5f-config-f6wpv" event={"ID":"c180597f-178d-412c-a661-0b63550e48b8","Type":"ContainerDied","Data":"8cb540a8d12319e78e1bc0401fd3e943854bed3121d8bff8563ba5766b6eadcf"} Nov 25 20:43:15 crc kubenswrapper[4983]: I1125 20:43:15.062748 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fgn5f-config-f6wpv" event={"ID":"c180597f-178d-412c-a661-0b63550e48b8","Type":"ContainerStarted","Data":"b131801e05475160a14af19f50e10053729065e3b6782174175f94101ab3b0f2"} Nov 25 20:43:15 crc kubenswrapper[4983]: I1125 20:43:15.078279 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-qcbkt" podStartSLOduration=2.879087496 podStartE2EDuration="14.078221863s" podCreationTimestamp="2025-11-25 20:43:01 +0000 UTC" firstStartedPulling="2025-11-25 20:43:02.512013959 +0000 UTC m=+963.624547361" lastFinishedPulling="2025-11-25 20:43:13.711148336 +0000 UTC m=+974.823681728" observedRunningTime="2025-11-25 20:43:15.076757334 +0000 UTC m=+976.189290746" watchObservedRunningTime="2025-11-25 20:43:15.078221863 +0000 UTC m=+976.190755295" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.073901 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"a527ec68fd9953a9b8e1403e1448479822e8da5698a16a49ca51a64514be07a9"} Nov 
25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.074487 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"8ad37f6d2c821cd1208b9d40fdc317698db049a5695f3cb5caa61b6fac442b12"} Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.074509 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"88be9be37171a6e99ebddbb288c62d49190c58be7204b8f29094bf1f3b8d7b37"} Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.074523 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"6214a429fbd9e237fafd4090f9af86ca2ae4acbb02773a532b716fa83e066677"} Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.434480 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.525607 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c180597f-178d-412c-a661-0b63550e48b8-scripts\") pod \"c180597f-178d-412c-a661-0b63550e48b8\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.525650 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-log-ovn\") pod \"c180597f-178d-412c-a661-0b63550e48b8\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.525673 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqrgw\" (UniqueName: \"kubernetes.io/projected/c180597f-178d-412c-a661-0b63550e48b8-kube-api-access-bqrgw\") pod \"c180597f-178d-412c-a661-0b63550e48b8\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.525813 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-run\") pod \"c180597f-178d-412c-a661-0b63550e48b8\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.525896 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-run-ovn\") pod \"c180597f-178d-412c-a661-0b63550e48b8\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.525929 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/c180597f-178d-412c-a661-0b63550e48b8-additional-scripts\") pod \"c180597f-178d-412c-a661-0b63550e48b8\" (UID: \"c180597f-178d-412c-a661-0b63550e48b8\") " Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.527154 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c180597f-178d-412c-a661-0b63550e48b8" (UID: "c180597f-178d-412c-a661-0b63550e48b8"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.527292 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c180597f-178d-412c-a661-0b63550e48b8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c180597f-178d-412c-a661-0b63550e48b8" (UID: "c180597f-178d-412c-a661-0b63550e48b8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.527343 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-run" (OuterVolumeSpecName: "var-run") pod "c180597f-178d-412c-a661-0b63550e48b8" (UID: "c180597f-178d-412c-a661-0b63550e48b8"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.527363 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c180597f-178d-412c-a661-0b63550e48b8" (UID: "c180597f-178d-412c-a661-0b63550e48b8"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.527658 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c180597f-178d-412c-a661-0b63550e48b8-scripts" (OuterVolumeSpecName: "scripts") pod "c180597f-178d-412c-a661-0b63550e48b8" (UID: "c180597f-178d-412c-a661-0b63550e48b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.536729 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c180597f-178d-412c-a661-0b63550e48b8-kube-api-access-bqrgw" (OuterVolumeSpecName: "kube-api-access-bqrgw") pod "c180597f-178d-412c-a661-0b63550e48b8" (UID: "c180597f-178d-412c-a661-0b63550e48b8"). InnerVolumeSpecName "kube-api-access-bqrgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.628421 4983 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-run\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.628462 4983 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.628488 4983 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c180597f-178d-412c-a661-0b63550e48b8-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.628505 4983 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c180597f-178d-412c-a661-0b63550e48b8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 
25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.628516 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c180597f-178d-412c-a661-0b63550e48b8-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.628528 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqrgw\" (UniqueName: \"kubernetes.io/projected/c180597f-178d-412c-a661-0b63550e48b8-kube-api-access-bqrgw\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:16 crc kubenswrapper[4983]: I1125 20:43:16.750868 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-fgn5f" Nov 25 20:43:17 crc kubenswrapper[4983]: I1125 20:43:17.096507 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fgn5f-config-f6wpv" event={"ID":"c180597f-178d-412c-a661-0b63550e48b8","Type":"ContainerDied","Data":"b131801e05475160a14af19f50e10053729065e3b6782174175f94101ab3b0f2"} Nov 25 20:43:17 crc kubenswrapper[4983]: I1125 20:43:17.096597 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b131801e05475160a14af19f50e10053729065e3b6782174175f94101ab3b0f2" Nov 25 20:43:17 crc kubenswrapper[4983]: I1125 20:43:17.096543 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fgn5f-config-f6wpv" Nov 25 20:43:17 crc kubenswrapper[4983]: I1125 20:43:17.575402 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fgn5f-config-f6wpv"] Nov 25 20:43:17 crc kubenswrapper[4983]: I1125 20:43:17.583778 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-fgn5f-config-f6wpv"] Nov 25 20:43:17 crc kubenswrapper[4983]: I1125 20:43:17.619222 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c180597f-178d-412c-a661-0b63550e48b8" path="/var/lib/kubelet/pods/c180597f-178d-412c-a661-0b63550e48b8/volumes" Nov 25 20:43:18 crc kubenswrapper[4983]: I1125 20:43:18.108179 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"b151271a5894dc3bab0d2d6cf9cc60ee35f43542d3d57476ec099e840d13b57a"} Nov 25 20:43:18 crc kubenswrapper[4983]: I1125 20:43:18.108545 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"629f433580128ad5829e36c41720ee69bcdb5666115fc9a3752b733df644fbfc"} Nov 25 20:43:18 crc kubenswrapper[4983]: I1125 20:43:18.108576 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"00ae0128ef43442f441ff5a017a0fec6cce4ca87b58c4ac0d2167b5d18562a37"} Nov 25 20:43:18 crc kubenswrapper[4983]: I1125 20:43:18.108587 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"a9369c14038e90737ef8b2c720ec8627db3880d952f750b7b602605968132133"} Nov 25 20:43:18 crc kubenswrapper[4983]: I1125 20:43:18.108597 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"a8102e647862de2712465ac69b43ea41119efab97de72a66590a3caf505df1f2"} Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.123543 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"0b647b75837148acc25465cfaafdf0280b3d8216111084cac58323d8b66d647c"} Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.123612 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"214288a7-ce6d-4844-b3f6-8ab78b7e1b54","Type":"ContainerStarted","Data":"4b4c95294be9db621f487da6137fdd34f3d69b7558e0b2fa789552519a2a6504"} Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.171536 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.972541751 podStartE2EDuration="31.171511984s" podCreationTimestamp="2025-11-25 20:42:48 +0000 UTC" firstStartedPulling="2025-11-25 20:43:05.892077865 +0000 UTC m=+967.004611267" lastFinishedPulling="2025-11-25 20:43:17.091048108 +0000 UTC m=+978.203581500" observedRunningTime="2025-11-25 20:43:19.169969053 +0000 UTC m=+980.282502445" watchObservedRunningTime="2025-11-25 20:43:19.171511984 +0000 UTC m=+980.284045376" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.496738 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-wmzt8"] Nov 25 20:43:19 crc kubenswrapper[4983]: E1125 20:43:19.497530 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c180597f-178d-412c-a661-0b63550e48b8" containerName="ovn-config" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.497665 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="c180597f-178d-412c-a661-0b63550e48b8" containerName="ovn-config" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 
20:43:19.497996 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="c180597f-178d-412c-a661-0b63550e48b8" containerName="ovn-config" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.499324 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.502447 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.515411 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-wmzt8"] Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.574163 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.574485 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4767m\" (UniqueName: \"kubernetes.io/projected/46474651-de69-46e5-af7e-f29b0eacf704-kube-api-access-4767m\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.574755 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.574974 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-config\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.575152 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-dns-svc\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.575287 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.676624 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.676737 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-config\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.676797 4983 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-dns-svc\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.676817 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.676855 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.676875 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4767m\" (UniqueName: \"kubernetes.io/projected/46474651-de69-46e5-af7e-f29b0eacf704-kube-api-access-4767m\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.678104 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-config\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.678280 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-dns-svc\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.678306 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.678526 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.679154 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.706439 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4767m\" (UniqueName: \"kubernetes.io/projected/46474651-de69-46e5-af7e-f29b0eacf704-kube-api-access-4767m\") pod \"dnsmasq-dns-764c5664d7-wmzt8\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:19 crc kubenswrapper[4983]: I1125 20:43:19.861990 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:20 crc kubenswrapper[4983]: I1125 20:43:20.352616 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-wmzt8"] Nov 25 20:43:21 crc kubenswrapper[4983]: I1125 20:43:21.146089 4983 generic.go:334] "Generic (PLEG): container finished" podID="46474651-de69-46e5-af7e-f29b0eacf704" containerID="c75c55f9a94049839f38c0efd0896cc0f3db878a93e74292c7f37cef6a145b38" exitCode=0 Nov 25 20:43:21 crc kubenswrapper[4983]: I1125 20:43:21.146211 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" event={"ID":"46474651-de69-46e5-af7e-f29b0eacf704","Type":"ContainerDied","Data":"c75c55f9a94049839f38c0efd0896cc0f3db878a93e74292c7f37cef6a145b38"} Nov 25 20:43:21 crc kubenswrapper[4983]: I1125 20:43:21.146900 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" event={"ID":"46474651-de69-46e5-af7e-f29b0eacf704","Type":"ContainerStarted","Data":"ce2741e3f3308d0fb187c4dd747aa7efbda1fcdc983b074c8b8782cb9c8502e6"} Nov 25 20:43:22 crc kubenswrapper[4983]: I1125 20:43:22.159191 4983 generic.go:334] "Generic (PLEG): container finished" podID="a1deb958-bfd0-4b82-bbf7-823375a53e6b" containerID="81dd2f779563d79c57b08ee10b6e08f743ae76217620cd159fa2338c43c05235" exitCode=0 Nov 25 20:43:22 crc kubenswrapper[4983]: I1125 20:43:22.159396 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qcbkt" event={"ID":"a1deb958-bfd0-4b82-bbf7-823375a53e6b","Type":"ContainerDied","Data":"81dd2f779563d79c57b08ee10b6e08f743ae76217620cd159fa2338c43c05235"} Nov 25 20:43:22 crc kubenswrapper[4983]: I1125 20:43:22.168805 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" event={"ID":"46474651-de69-46e5-af7e-f29b0eacf704","Type":"ContainerStarted","Data":"730fd13d57dbd6535c0db7c643d82e59791d7da78c563108aac3166b70f504ee"} Nov 25 
20:43:22 crc kubenswrapper[4983]: I1125 20:43:22.170393 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:22 crc kubenswrapper[4983]: I1125 20:43:22.206295 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" podStartSLOduration=3.206265897 podStartE2EDuration="3.206265897s" podCreationTimestamp="2025-11-25 20:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:43:22.204740767 +0000 UTC m=+983.317274159" watchObservedRunningTime="2025-11-25 20:43:22.206265897 +0000 UTC m=+983.318799299" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.141738 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.249841 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.562529 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4vwwd"] Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.565787 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4vwwd" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.581421 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4vwwd"] Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.652445 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwdx7\" (UniqueName: \"kubernetes.io/projected/e111c438-3824-4fac-9db8-ce47d6974e6d-kube-api-access-cwdx7\") pod \"cinder-db-create-4vwwd\" (UID: \"e111c438-3824-4fac-9db8-ce47d6974e6d\") " pod="openstack/cinder-db-create-4vwwd" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.652544 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e111c438-3824-4fac-9db8-ce47d6974e6d-operator-scripts\") pod \"cinder-db-create-4vwwd\" (UID: \"e111c438-3824-4fac-9db8-ce47d6974e6d\") " pod="openstack/cinder-db-create-4vwwd" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.695629 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-chrnq"] Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.697145 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-chrnq" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.709601 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-acb0-account-create-update-n7fkg"] Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.710625 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-acb0-account-create-update-n7fkg" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.713131 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.730736 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-chrnq"] Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.739758 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-acb0-account-create-update-n7fkg"] Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.753915 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxvl\" (UniqueName: \"kubernetes.io/projected/5e138f00-2737-483b-ad2a-afd28c35e48b-kube-api-access-rfxvl\") pod \"barbican-acb0-account-create-update-n7fkg\" (UID: \"5e138f00-2737-483b-ad2a-afd28c35e48b\") " pod="openstack/barbican-acb0-account-create-update-n7fkg" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.753967 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8c2\" (UniqueName: \"kubernetes.io/projected/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8-kube-api-access-rb8c2\") pod \"barbican-db-create-chrnq\" (UID: \"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8\") " pod="openstack/barbican-db-create-chrnq" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.753993 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8-operator-scripts\") pod \"barbican-db-create-chrnq\" (UID: \"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8\") " pod="openstack/barbican-db-create-chrnq" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.754028 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e138f00-2737-483b-ad2a-afd28c35e48b-operator-scripts\") pod \"barbican-acb0-account-create-update-n7fkg\" (UID: \"5e138f00-2737-483b-ad2a-afd28c35e48b\") " pod="openstack/barbican-acb0-account-create-update-n7fkg" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.754054 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e111c438-3824-4fac-9db8-ce47d6974e6d-operator-scripts\") pod \"cinder-db-create-4vwwd\" (UID: \"e111c438-3824-4fac-9db8-ce47d6974e6d\") " pod="openstack/cinder-db-create-4vwwd" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.754133 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwdx7\" (UniqueName: \"kubernetes.io/projected/e111c438-3824-4fac-9db8-ce47d6974e6d-kube-api-access-cwdx7\") pod \"cinder-db-create-4vwwd\" (UID: \"e111c438-3824-4fac-9db8-ce47d6974e6d\") " pod="openstack/cinder-db-create-4vwwd" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.755289 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e111c438-3824-4fac-9db8-ce47d6974e6d-operator-scripts\") pod \"cinder-db-create-4vwwd\" (UID: \"e111c438-3824-4fac-9db8-ce47d6974e6d\") " pod="openstack/cinder-db-create-4vwwd" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.780705 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-643c-account-create-update-fdrdb"] Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.781698 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-643c-account-create-update-fdrdb" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.787039 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwdx7\" (UniqueName: \"kubernetes.io/projected/e111c438-3824-4fac-9db8-ce47d6974e6d-kube-api-access-cwdx7\") pod \"cinder-db-create-4vwwd\" (UID: \"e111c438-3824-4fac-9db8-ce47d6974e6d\") " pod="openstack/cinder-db-create-4vwwd" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.788478 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-643c-account-create-update-fdrdb"] Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.789655 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.835518 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wbb46"] Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.838230 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wbb46" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.843106 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.843371 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.844363 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.844680 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q4pfh" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.854962 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v75fj\" (UniqueName: \"kubernetes.io/projected/e8b4648d-344a-4539-af4c-ddf7c8a23068-kube-api-access-v75fj\") pod \"cinder-643c-account-create-update-fdrdb\" (UID: \"e8b4648d-344a-4539-af4c-ddf7c8a23068\") " pod="openstack/cinder-643c-account-create-update-fdrdb" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.855006 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfxvl\" (UniqueName: \"kubernetes.io/projected/5e138f00-2737-483b-ad2a-afd28c35e48b-kube-api-access-rfxvl\") pod \"barbican-acb0-account-create-update-n7fkg\" (UID: \"5e138f00-2737-483b-ad2a-afd28c35e48b\") " pod="openstack/barbican-acb0-account-create-update-n7fkg" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.855031 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb8c2\" (UniqueName: \"kubernetes.io/projected/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8-kube-api-access-rb8c2\") pod \"barbican-db-create-chrnq\" (UID: \"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8\") " pod="openstack/barbican-db-create-chrnq" Nov 25 
20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.855049 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8-operator-scripts\") pod \"barbican-db-create-chrnq\" (UID: \"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8\") " pod="openstack/barbican-db-create-chrnq" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.855084 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e138f00-2737-483b-ad2a-afd28c35e48b-operator-scripts\") pod \"barbican-acb0-account-create-update-n7fkg\" (UID: \"5e138f00-2737-483b-ad2a-afd28c35e48b\") " pod="openstack/barbican-acb0-account-create-update-n7fkg" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.855111 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82817911-bcba-4071-a24b-c6dbb6f1973d-config-data\") pod \"keystone-db-sync-wbb46\" (UID: \"82817911-bcba-4071-a24b-c6dbb6f1973d\") " pod="openstack/keystone-db-sync-wbb46" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.855141 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b4648d-344a-4539-af4c-ddf7c8a23068-operator-scripts\") pod \"cinder-643c-account-create-update-fdrdb\" (UID: \"e8b4648d-344a-4539-af4c-ddf7c8a23068\") " pod="openstack/cinder-643c-account-create-update-fdrdb" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.855243 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trnxh\" (UniqueName: \"kubernetes.io/projected/82817911-bcba-4071-a24b-c6dbb6f1973d-kube-api-access-trnxh\") pod \"keystone-db-sync-wbb46\" (UID: \"82817911-bcba-4071-a24b-c6dbb6f1973d\") " 
pod="openstack/keystone-db-sync-wbb46" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.855302 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82817911-bcba-4071-a24b-c6dbb6f1973d-combined-ca-bundle\") pod \"keystone-db-sync-wbb46\" (UID: \"82817911-bcba-4071-a24b-c6dbb6f1973d\") " pod="openstack/keystone-db-sync-wbb46" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.856003 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wbb46"] Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.856472 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8-operator-scripts\") pod \"barbican-db-create-chrnq\" (UID: \"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8\") " pod="openstack/barbican-db-create-chrnq" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.856964 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e138f00-2737-483b-ad2a-afd28c35e48b-operator-scripts\") pod \"barbican-acb0-account-create-update-n7fkg\" (UID: \"5e138f00-2737-483b-ad2a-afd28c35e48b\") " pod="openstack/barbican-acb0-account-create-update-n7fkg" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.865141 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.876071 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfxvl\" (UniqueName: \"kubernetes.io/projected/5e138f00-2737-483b-ad2a-afd28c35e48b-kube-api-access-rfxvl\") pod \"barbican-acb0-account-create-update-n7fkg\" (UID: \"5e138f00-2737-483b-ad2a-afd28c35e48b\") " pod="openstack/barbican-acb0-account-create-update-n7fkg" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.878460 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb8c2\" (UniqueName: \"kubernetes.io/projected/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8-kube-api-access-rb8c2\") pod \"barbican-db-create-chrnq\" (UID: \"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8\") " pod="openstack/barbican-db-create-chrnq" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.902261 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4vwwd" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.947104 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-74s6m"] Nov 25 20:43:23 crc kubenswrapper[4983]: E1125 20:43:23.947644 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1deb958-bfd0-4b82-bbf7-823375a53e6b" containerName="glance-db-sync" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.947728 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1deb958-bfd0-4b82-bbf7-823375a53e6b" containerName="glance-db-sync" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.947963 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1deb958-bfd0-4b82-bbf7-823375a53e6b" containerName="glance-db-sync" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.948649 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-74s6m" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.958418 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b4648d-344a-4539-af4c-ddf7c8a23068-operator-scripts\") pod \"cinder-643c-account-create-update-fdrdb\" (UID: \"e8b4648d-344a-4539-af4c-ddf7c8a23068\") " pod="openstack/cinder-643c-account-create-update-fdrdb" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.958477 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trnxh\" (UniqueName: \"kubernetes.io/projected/82817911-bcba-4071-a24b-c6dbb6f1973d-kube-api-access-trnxh\") pod \"keystone-db-sync-wbb46\" (UID: \"82817911-bcba-4071-a24b-c6dbb6f1973d\") " pod="openstack/keystone-db-sync-wbb46" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.958510 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82817911-bcba-4071-a24b-c6dbb6f1973d-combined-ca-bundle\") pod \"keystone-db-sync-wbb46\" (UID: \"82817911-bcba-4071-a24b-c6dbb6f1973d\") " pod="openstack/keystone-db-sync-wbb46" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.958599 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v75fj\" (UniqueName: \"kubernetes.io/projected/e8b4648d-344a-4539-af4c-ddf7c8a23068-kube-api-access-v75fj\") pod \"cinder-643c-account-create-update-fdrdb\" (UID: \"e8b4648d-344a-4539-af4c-ddf7c8a23068\") " pod="openstack/cinder-643c-account-create-update-fdrdb" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.958646 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82817911-bcba-4071-a24b-c6dbb6f1973d-config-data\") pod \"keystone-db-sync-wbb46\" (UID: 
\"82817911-bcba-4071-a24b-c6dbb6f1973d\") " pod="openstack/keystone-db-sync-wbb46" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.960294 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b4648d-344a-4539-af4c-ddf7c8a23068-operator-scripts\") pod \"cinder-643c-account-create-update-fdrdb\" (UID: \"e8b4648d-344a-4539-af4c-ddf7c8a23068\") " pod="openstack/cinder-643c-account-create-update-fdrdb" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.962714 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-74s6m"] Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.968592 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82817911-bcba-4071-a24b-c6dbb6f1973d-combined-ca-bundle\") pod \"keystone-db-sync-wbb46\" (UID: \"82817911-bcba-4071-a24b-c6dbb6f1973d\") " pod="openstack/keystone-db-sync-wbb46" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.991771 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trnxh\" (UniqueName: \"kubernetes.io/projected/82817911-bcba-4071-a24b-c6dbb6f1973d-kube-api-access-trnxh\") pod \"keystone-db-sync-wbb46\" (UID: \"82817911-bcba-4071-a24b-c6dbb6f1973d\") " pod="openstack/keystone-db-sync-wbb46" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.991949 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v75fj\" (UniqueName: \"kubernetes.io/projected/e8b4648d-344a-4539-af4c-ddf7c8a23068-kube-api-access-v75fj\") pod \"cinder-643c-account-create-update-fdrdb\" (UID: \"e8b4648d-344a-4539-af4c-ddf7c8a23068\") " pod="openstack/cinder-643c-account-create-update-fdrdb" Nov 25 20:43:23 crc kubenswrapper[4983]: I1125 20:43:23.997915 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/82817911-bcba-4071-a24b-c6dbb6f1973d-config-data\") pod \"keystone-db-sync-wbb46\" (UID: \"82817911-bcba-4071-a24b-c6dbb6f1973d\") " pod="openstack/keystone-db-sync-wbb46" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.003809 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wbb46" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.051861 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-chrnq" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.066670 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4cc5-account-create-update-q4b96"] Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.069014 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-acb0-account-create-update-n7fkg" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.069779 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-config-data\") pod \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.074241 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7txk2\" (UniqueName: \"kubernetes.io/projected/a1deb958-bfd0-4b82-bbf7-823375a53e6b-kube-api-access-7txk2\") pod \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.074331 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-combined-ca-bundle\") pod \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " Nov 
25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.074536 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-db-sync-config-data\") pod \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\" (UID: \"a1deb958-bfd0-4b82-bbf7-823375a53e6b\") " Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.075035 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2s22\" (UniqueName: \"kubernetes.io/projected/272f4d2e-dc3a-4db5-a10e-891f8143f934-kube-api-access-z2s22\") pod \"neutron-db-create-74s6m\" (UID: \"272f4d2e-dc3a-4db5-a10e-891f8143f934\") " pod="openstack/neutron-db-create-74s6m" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.075239 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/272f4d2e-dc3a-4db5-a10e-891f8143f934-operator-scripts\") pod \"neutron-db-create-74s6m\" (UID: \"272f4d2e-dc3a-4db5-a10e-891f8143f934\") " pod="openstack/neutron-db-create-74s6m" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.075383 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4cc5-account-create-update-q4b96" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.084673 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4cc5-account-create-update-q4b96"] Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.084806 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a1deb958-bfd0-4b82-bbf7-823375a53e6b" (UID: "a1deb958-bfd0-4b82-bbf7-823375a53e6b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.085043 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.085363 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1deb958-bfd0-4b82-bbf7-823375a53e6b-kube-api-access-7txk2" (OuterVolumeSpecName: "kube-api-access-7txk2") pod "a1deb958-bfd0-4b82-bbf7-823375a53e6b" (UID: "a1deb958-bfd0-4b82-bbf7-823375a53e6b"). InnerVolumeSpecName "kube-api-access-7txk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.130438 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1deb958-bfd0-4b82-bbf7-823375a53e6b" (UID: "a1deb958-bfd0-4b82-bbf7-823375a53e6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.173616 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-config-data" (OuterVolumeSpecName: "config-data") pod "a1deb958-bfd0-4b82-bbf7-823375a53e6b" (UID: "a1deb958-bfd0-4b82-bbf7-823375a53e6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.175964 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99dht\" (UniqueName: \"kubernetes.io/projected/06715857-e8a8-442e-8457-79b6e5506db4-kube-api-access-99dht\") pod \"neutron-4cc5-account-create-update-q4b96\" (UID: \"06715857-e8a8-442e-8457-79b6e5506db4\") " pod="openstack/neutron-4cc5-account-create-update-q4b96" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.176064 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/272f4d2e-dc3a-4db5-a10e-891f8143f934-operator-scripts\") pod \"neutron-db-create-74s6m\" (UID: \"272f4d2e-dc3a-4db5-a10e-891f8143f934\") " pod="openstack/neutron-db-create-74s6m" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.176141 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06715857-e8a8-442e-8457-79b6e5506db4-operator-scripts\") pod \"neutron-4cc5-account-create-update-q4b96\" (UID: \"06715857-e8a8-442e-8457-79b6e5506db4\") " pod="openstack/neutron-4cc5-account-create-update-q4b96" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.176190 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2s22\" (UniqueName: \"kubernetes.io/projected/272f4d2e-dc3a-4db5-a10e-891f8143f934-kube-api-access-z2s22\") pod \"neutron-db-create-74s6m\" (UID: \"272f4d2e-dc3a-4db5-a10e-891f8143f934\") " pod="openstack/neutron-db-create-74s6m" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.176273 4983 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 
20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.176292 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.176303 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7txk2\" (UniqueName: \"kubernetes.io/projected/a1deb958-bfd0-4b82-bbf7-823375a53e6b-kube-api-access-7txk2\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.176314 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1deb958-bfd0-4b82-bbf7-823375a53e6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.177020 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/272f4d2e-dc3a-4db5-a10e-891f8143f934-operator-scripts\") pod \"neutron-db-create-74s6m\" (UID: \"272f4d2e-dc3a-4db5-a10e-891f8143f934\") " pod="openstack/neutron-db-create-74s6m" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.210857 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2s22\" (UniqueName: \"kubernetes.io/projected/272f4d2e-dc3a-4db5-a10e-891f8143f934-kube-api-access-z2s22\") pod \"neutron-db-create-74s6m\" (UID: \"272f4d2e-dc3a-4db5-a10e-891f8143f934\") " pod="openstack/neutron-db-create-74s6m" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.246477 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qcbkt" event={"ID":"a1deb958-bfd0-4b82-bbf7-823375a53e6b","Type":"ContainerDied","Data":"b3b05d2a804d221299358320c9fd37f12ed4e29956e37089213aa301097612d6"} Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.246569 4983 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="b3b05d2a804d221299358320c9fd37f12ed4e29956e37089213aa301097612d6" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.246730 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qcbkt" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.278313 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06715857-e8a8-442e-8457-79b6e5506db4-operator-scripts\") pod \"neutron-4cc5-account-create-update-q4b96\" (UID: \"06715857-e8a8-442e-8457-79b6e5506db4\") " pod="openstack/neutron-4cc5-account-create-update-q4b96" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.278390 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06715857-e8a8-442e-8457-79b6e5506db4-operator-scripts\") pod \"neutron-4cc5-account-create-update-q4b96\" (UID: \"06715857-e8a8-442e-8457-79b6e5506db4\") " pod="openstack/neutron-4cc5-account-create-update-q4b96" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.278513 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99dht\" (UniqueName: \"kubernetes.io/projected/06715857-e8a8-442e-8457-79b6e5506db4-kube-api-access-99dht\") pod \"neutron-4cc5-account-create-update-q4b96\" (UID: \"06715857-e8a8-442e-8457-79b6e5506db4\") " pod="openstack/neutron-4cc5-account-create-update-q4b96" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.281762 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4vwwd"] Nov 25 20:43:24 crc kubenswrapper[4983]: W1125 20:43:24.285031 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode111c438_3824_4fac_9db8_ce47d6974e6d.slice/crio-4f20a8e7e2487720d81094e3a334665597eb6661ee50d1b649239c5105677f16 
WatchSource:0}: Error finding container 4f20a8e7e2487720d81094e3a334665597eb6661ee50d1b649239c5105677f16: Status 404 returned error can't find the container with id 4f20a8e7e2487720d81094e3a334665597eb6661ee50d1b649239c5105677f16 Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.290285 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-643c-account-create-update-fdrdb" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.306283 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99dht\" (UniqueName: \"kubernetes.io/projected/06715857-e8a8-442e-8457-79b6e5506db4-kube-api-access-99dht\") pod \"neutron-4cc5-account-create-update-q4b96\" (UID: \"06715857-e8a8-442e-8457-79b6e5506db4\") " pod="openstack/neutron-4cc5-account-create-update-q4b96" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.325805 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-74s6m" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.405345 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4cc5-account-create-update-q4b96" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.535785 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-acb0-account-create-update-n7fkg"] Nov 25 20:43:24 crc kubenswrapper[4983]: W1125 20:43:24.540056 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e138f00_2737_483b_ad2a_afd28c35e48b.slice/crio-8e83a3924f44c25426c4c3d1eecb82a0ee9a91ee4007f0054fcdf2504dedb0f2 WatchSource:0}: Error finding container 8e83a3924f44c25426c4c3d1eecb82a0ee9a91ee4007f0054fcdf2504dedb0f2: Status 404 returned error can't find the container with id 8e83a3924f44c25426c4c3d1eecb82a0ee9a91ee4007f0054fcdf2504dedb0f2 Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.622155 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-chrnq"] Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.645075 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wbb46"] Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.695544 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-wmzt8"] Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.696211 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" podUID="46474651-de69-46e5-af7e-f29b0eacf704" containerName="dnsmasq-dns" containerID="cri-o://730fd13d57dbd6535c0db7c643d82e59791d7da78c563108aac3166b70f504ee" gracePeriod=10 Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.765105 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5m84b"] Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.772387 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.812682 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5m84b"] Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.815918 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.815960 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr2jd\" (UniqueName: \"kubernetes.io/projected/f58c94e7-0496-482c-aa85-079d35d0bd31-kube-api-access-gr2jd\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.815986 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-config\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.816046 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.816079 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.816148 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.921143 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.921240 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.921286 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.921306 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gr2jd\" (UniqueName: \"kubernetes.io/projected/f58c94e7-0496-482c-aa85-079d35d0bd31-kube-api-access-gr2jd\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.921323 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-config\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.921371 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.922327 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.929079 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.929670 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.930229 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.931101 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-config\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.973221 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-643c-account-create-update-fdrdb"] Nov 25 20:43:24 crc kubenswrapper[4983]: I1125 20:43:24.982249 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr2jd\" (UniqueName: \"kubernetes.io/projected/f58c94e7-0496-482c-aa85-079d35d0bd31-kube-api-access-gr2jd\") pod \"dnsmasq-dns-74f6bcbc87-5m84b\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.078469 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-74s6m"] Nov 25 20:43:25 crc kubenswrapper[4983]: W1125 20:43:25.091402 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod272f4d2e_dc3a_4db5_a10e_891f8143f934.slice/crio-e8b6686c92ffc3cdf142321119219f6bdf56409da4d907df99ee43cd08d080b5 
WatchSource:0}: Error finding container e8b6686c92ffc3cdf142321119219f6bdf56409da4d907df99ee43cd08d080b5: Status 404 returned error can't find the container with id e8b6686c92ffc3cdf142321119219f6bdf56409da4d907df99ee43cd08d080b5 Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.247211 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.278604 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-74s6m" event={"ID":"272f4d2e-dc3a-4db5-a10e-891f8143f934","Type":"ContainerStarted","Data":"e8b6686c92ffc3cdf142321119219f6bdf56409da4d907df99ee43cd08d080b5"} Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.284257 4983 generic.go:334] "Generic (PLEG): container finished" podID="e111c438-3824-4fac-9db8-ce47d6974e6d" containerID="5fcf628b4ad5b400cd37b9d2299e2e52345e55871cb727acde3a2411c1f32ac3" exitCode=0 Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.284330 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4vwwd" event={"ID":"e111c438-3824-4fac-9db8-ce47d6974e6d","Type":"ContainerDied","Data":"5fcf628b4ad5b400cd37b9d2299e2e52345e55871cb727acde3a2411c1f32ac3"} Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.284363 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4vwwd" event={"ID":"e111c438-3824-4fac-9db8-ce47d6974e6d","Type":"ContainerStarted","Data":"4f20a8e7e2487720d81094e3a334665597eb6661ee50d1b649239c5105677f16"} Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.288539 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-acb0-account-create-update-n7fkg" event={"ID":"5e138f00-2737-483b-ad2a-afd28c35e48b","Type":"ContainerStarted","Data":"2ca64f5188198958d454608cded6d3ba68059323a30eb2be7c1c5ceb9ba1b025"} Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.288602 4983 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-acb0-account-create-update-n7fkg" event={"ID":"5e138f00-2737-483b-ad2a-afd28c35e48b","Type":"ContainerStarted","Data":"8e83a3924f44c25426c4c3d1eecb82a0ee9a91ee4007f0054fcdf2504dedb0f2"} Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.299578 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-chrnq" event={"ID":"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8","Type":"ContainerStarted","Data":"61f182f7d550bc1e1cbbc455ed7ae33eccb4d02fe6091e934865cc72a74c1a0f"} Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.299639 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-chrnq" event={"ID":"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8","Type":"ContainerStarted","Data":"7d2f55d2039f15cb29dbea1270ce970f7d2b124aff75caf62d934aa87fe00292"} Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.319066 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-643c-account-create-update-fdrdb" event={"ID":"e8b4648d-344a-4539-af4c-ddf7c8a23068","Type":"ContainerStarted","Data":"b559c3716b2a62a667298c768c865d8a72df6254d029ee4b7b279c846d248098"} Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.319122 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-643c-account-create-update-fdrdb" event={"ID":"e8b4648d-344a-4539-af4c-ddf7c8a23068","Type":"ContainerStarted","Data":"374a8e3d920a271f65cfe0de90259bd819cf13d915a16a581be9d9cbfff5e289"} Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.322639 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wbb46" event={"ID":"82817911-bcba-4071-a24b-c6dbb6f1973d","Type":"ContainerStarted","Data":"9fc1aa18264c54c8f6008377ba9f62f2f150f4171d89c651d8fde0d03643bb2a"} Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.325538 4983 generic.go:334] "Generic (PLEG): container finished" 
podID="46474651-de69-46e5-af7e-f29b0eacf704" containerID="730fd13d57dbd6535c0db7c643d82e59791d7da78c563108aac3166b70f504ee" exitCode=0 Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.325608 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" event={"ID":"46474651-de69-46e5-af7e-f29b0eacf704","Type":"ContainerDied","Data":"730fd13d57dbd6535c0db7c643d82e59791d7da78c563108aac3166b70f504ee"} Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.338383 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-acb0-account-create-update-n7fkg" podStartSLOduration=2.3383492280000002 podStartE2EDuration="2.338349228s" podCreationTimestamp="2025-11-25 20:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:43:25.328763224 +0000 UTC m=+986.441296616" watchObservedRunningTime="2025-11-25 20:43:25.338349228 +0000 UTC m=+986.450882620" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.364337 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-chrnq" podStartSLOduration=2.364318815 podStartE2EDuration="2.364318815s" podCreationTimestamp="2025-11-25 20:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:43:25.358796249 +0000 UTC m=+986.471329661" watchObservedRunningTime="2025-11-25 20:43:25.364318815 +0000 UTC m=+986.476852207" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.366093 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.381795 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-643c-account-create-update-fdrdb" podStartSLOduration=2.381771037 podStartE2EDuration="2.381771037s" podCreationTimestamp="2025-11-25 20:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:43:25.380153285 +0000 UTC m=+986.492686687" watchObservedRunningTime="2025-11-25 20:43:25.381771037 +0000 UTC m=+986.494304429" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.413646 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4cc5-account-create-update-q4b96"] Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.441602 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4767m\" (UniqueName: \"kubernetes.io/projected/46474651-de69-46e5-af7e-f29b0eacf704-kube-api-access-4767m\") pod \"46474651-de69-46e5-af7e-f29b0eacf704\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.441682 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-sb\") pod \"46474651-de69-46e5-af7e-f29b0eacf704\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.441739 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-config\") pod \"46474651-de69-46e5-af7e-f29b0eacf704\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.441876 4983 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-dns-svc\") pod \"46474651-de69-46e5-af7e-f29b0eacf704\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.441908 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-dns-swift-storage-0\") pod \"46474651-de69-46e5-af7e-f29b0eacf704\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.441931 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-nb\") pod \"46474651-de69-46e5-af7e-f29b0eacf704\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.453808 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46474651-de69-46e5-af7e-f29b0eacf704-kube-api-access-4767m" (OuterVolumeSpecName: "kube-api-access-4767m") pod "46474651-de69-46e5-af7e-f29b0eacf704" (UID: "46474651-de69-46e5-af7e-f29b0eacf704"). InnerVolumeSpecName "kube-api-access-4767m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.512309 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-config" (OuterVolumeSpecName: "config") pod "46474651-de69-46e5-af7e-f29b0eacf704" (UID: "46474651-de69-46e5-af7e-f29b0eacf704"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.521170 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46474651-de69-46e5-af7e-f29b0eacf704" (UID: "46474651-de69-46e5-af7e-f29b0eacf704"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.534622 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "46474651-de69-46e5-af7e-f29b0eacf704" (UID: "46474651-de69-46e5-af7e-f29b0eacf704"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.544509 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.544786 4983 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.544854 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4767m\" (UniqueName: \"kubernetes.io/projected/46474651-de69-46e5-af7e-f29b0eacf704-kube-api-access-4767m\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.544932 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-config\") on node \"crc\" DevicePath \"\"" 
Nov 25 20:43:25 crc kubenswrapper[4983]: E1125 20:43:25.571629 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-nb podName:46474651-de69-46e5-af7e-f29b0eacf704 nodeName:}" failed. No retries permitted until 2025-11-25 20:43:26.071596054 +0000 UTC m=+987.184129446 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-nb" (UniqueName: "kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-nb") pod "46474651-de69-46e5-af7e-f29b0eacf704" (UID: "46474651-de69-46e5-af7e-f29b0eacf704") : error deleting /var/lib/kubelet/pods/46474651-de69-46e5-af7e-f29b0eacf704/volume-subpaths: remove /var/lib/kubelet/pods/46474651-de69-46e5-af7e-f29b0eacf704/volume-subpaths: no such file or directory Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.572842 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "46474651-de69-46e5-af7e-f29b0eacf704" (UID: "46474651-de69-46e5-af7e-f29b0eacf704"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.646998 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:25 crc kubenswrapper[4983]: I1125 20:43:25.779934 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5m84b"] Nov 25 20:43:25 crc kubenswrapper[4983]: W1125 20:43:25.791852 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf58c94e7_0496_482c_aa85_079d35d0bd31.slice/crio-3932098fb5acb3b128e61efd10cd3fe1e18d6e063177f57df6e2ae563282ab40 WatchSource:0}: Error finding container 3932098fb5acb3b128e61efd10cd3fe1e18d6e063177f57df6e2ae563282ab40: Status 404 returned error can't find the container with id 3932098fb5acb3b128e61efd10cd3fe1e18d6e063177f57df6e2ae563282ab40 Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.156149 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-nb\") pod \"46474651-de69-46e5-af7e-f29b0eacf704\" (UID: \"46474651-de69-46e5-af7e-f29b0eacf704\") " Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.161382 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "46474651-de69-46e5-af7e-f29b0eacf704" (UID: "46474651-de69-46e5-af7e-f29b0eacf704"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.258506 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46474651-de69-46e5-af7e-f29b0eacf704-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.341896 4983 generic.go:334] "Generic (PLEG): container finished" podID="06715857-e8a8-442e-8457-79b6e5506db4" containerID="24a6f8e17355f0a6c7fb386636b3bb78048cec132699b90249e1ea1a7ad876df" exitCode=0 Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.341978 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4cc5-account-create-update-q4b96" event={"ID":"06715857-e8a8-442e-8457-79b6e5506db4","Type":"ContainerDied","Data":"24a6f8e17355f0a6c7fb386636b3bb78048cec132699b90249e1ea1a7ad876df"} Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.342039 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4cc5-account-create-update-q4b96" event={"ID":"06715857-e8a8-442e-8457-79b6e5506db4","Type":"ContainerStarted","Data":"d7e7d40f9778f19ece27c89b361f08d3e27020edc9f540cf330aa4b9acb7d43b"} Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.344790 4983 generic.go:334] "Generic (PLEG): container finished" podID="5e138f00-2737-483b-ad2a-afd28c35e48b" containerID="2ca64f5188198958d454608cded6d3ba68059323a30eb2be7c1c5ceb9ba1b025" exitCode=0 Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.344885 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-acb0-account-create-update-n7fkg" event={"ID":"5e138f00-2737-483b-ad2a-afd28c35e48b","Type":"ContainerDied","Data":"2ca64f5188198958d454608cded6d3ba68059323a30eb2be7c1c5ceb9ba1b025"} Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.349546 4983 generic.go:334] "Generic (PLEG): container finished" podID="47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8" 
containerID="61f182f7d550bc1e1cbbc455ed7ae33eccb4d02fe6091e934865cc72a74c1a0f" exitCode=0 Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.349668 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-chrnq" event={"ID":"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8","Type":"ContainerDied","Data":"61f182f7d550bc1e1cbbc455ed7ae33eccb4d02fe6091e934865cc72a74c1a0f"} Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.351374 4983 generic.go:334] "Generic (PLEG): container finished" podID="e8b4648d-344a-4539-af4c-ddf7c8a23068" containerID="b559c3716b2a62a667298c768c865d8a72df6254d029ee4b7b279c846d248098" exitCode=0 Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.351448 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-643c-account-create-update-fdrdb" event={"ID":"e8b4648d-344a-4539-af4c-ddf7c8a23068","Type":"ContainerDied","Data":"b559c3716b2a62a667298c768c865d8a72df6254d029ee4b7b279c846d248098"} Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.356421 4983 generic.go:334] "Generic (PLEG): container finished" podID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerID="d26bb9acc5feaf9274d27b3957f50b1c7d68558b45d7c64729a3a6bfd17c6a7f" exitCode=0 Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.356544 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" event={"ID":"f58c94e7-0496-482c-aa85-079d35d0bd31","Type":"ContainerDied","Data":"d26bb9acc5feaf9274d27b3957f50b1c7d68558b45d7c64729a3a6bfd17c6a7f"} Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.356614 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" event={"ID":"f58c94e7-0496-482c-aa85-079d35d0bd31","Type":"ContainerStarted","Data":"3932098fb5acb3b128e61efd10cd3fe1e18d6e063177f57df6e2ae563282ab40"} Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.376434 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" event={"ID":"46474651-de69-46e5-af7e-f29b0eacf704","Type":"ContainerDied","Data":"ce2741e3f3308d0fb187c4dd747aa7efbda1fcdc983b074c8b8782cb9c8502e6"} Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.376518 4983 scope.go:117] "RemoveContainer" containerID="730fd13d57dbd6535c0db7c643d82e59791d7da78c563108aac3166b70f504ee" Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.376757 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-wmzt8" Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.396462 4983 generic.go:334] "Generic (PLEG): container finished" podID="272f4d2e-dc3a-4db5-a10e-891f8143f934" containerID="bba2a6f29e1df6f26216b06258d0b8850199e07f4692208f41bb8a4415c5b716" exitCode=0 Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.396666 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-74s6m" event={"ID":"272f4d2e-dc3a-4db5-a10e-891f8143f934","Type":"ContainerDied","Data":"bba2a6f29e1df6f26216b06258d0b8850199e07f4692208f41bb8a4415c5b716"} Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.498801 4983 scope.go:117] "RemoveContainer" containerID="c75c55f9a94049839f38c0efd0896cc0f3db878a93e74292c7f37cef6a145b38" Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.534149 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-wmzt8"] Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.540162 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-wmzt8"] Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.722207 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4vwwd" Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.773639 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwdx7\" (UniqueName: \"kubernetes.io/projected/e111c438-3824-4fac-9db8-ce47d6974e6d-kube-api-access-cwdx7\") pod \"e111c438-3824-4fac-9db8-ce47d6974e6d\" (UID: \"e111c438-3824-4fac-9db8-ce47d6974e6d\") " Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.775762 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e111c438-3824-4fac-9db8-ce47d6974e6d-operator-scripts\") pod \"e111c438-3824-4fac-9db8-ce47d6974e6d\" (UID: \"e111c438-3824-4fac-9db8-ce47d6974e6d\") " Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.776506 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e111c438-3824-4fac-9db8-ce47d6974e6d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e111c438-3824-4fac-9db8-ce47d6974e6d" (UID: "e111c438-3824-4fac-9db8-ce47d6974e6d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.782055 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e111c438-3824-4fac-9db8-ce47d6974e6d-kube-api-access-cwdx7" (OuterVolumeSpecName: "kube-api-access-cwdx7") pod "e111c438-3824-4fac-9db8-ce47d6974e6d" (UID: "e111c438-3824-4fac-9db8-ce47d6974e6d"). InnerVolumeSpecName "kube-api-access-cwdx7". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.878465 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e111c438-3824-4fac-9db8-ce47d6974e6d-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:26 crc kubenswrapper[4983]: I1125 20:43:26.878503 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwdx7\" (UniqueName: \"kubernetes.io/projected/e111c438-3824-4fac-9db8-ce47d6974e6d-kube-api-access-cwdx7\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:27 crc kubenswrapper[4983]: I1125 20:43:27.413276 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4vwwd" event={"ID":"e111c438-3824-4fac-9db8-ce47d6974e6d","Type":"ContainerDied","Data":"4f20a8e7e2487720d81094e3a334665597eb6661ee50d1b649239c5105677f16"}
Nov 25 20:43:27 crc kubenswrapper[4983]: I1125 20:43:27.413631 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f20a8e7e2487720d81094e3a334665597eb6661ee50d1b649239c5105677f16"
Nov 25 20:43:27 crc kubenswrapper[4983]: I1125 20:43:27.413384 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4vwwd"
Nov 25 20:43:27 crc kubenswrapper[4983]: I1125 20:43:27.424734 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" event={"ID":"f58c94e7-0496-482c-aa85-079d35d0bd31","Type":"ContainerStarted","Data":"b1e16be10d893541defc632a36c2712e2a325224e052d2f75333f7a05717a29a"}
Nov 25 20:43:27 crc kubenswrapper[4983]: I1125 20:43:27.462261 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" podStartSLOduration=3.462240333 podStartE2EDuration="3.462240333s" podCreationTimestamp="2025-11-25 20:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:43:27.458565826 +0000 UTC m=+988.571099218" watchObservedRunningTime="2025-11-25 20:43:27.462240333 +0000 UTC m=+988.574773725"
Nov 25 20:43:27 crc kubenswrapper[4983]: I1125 20:43:27.619834 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46474651-de69-46e5-af7e-f29b0eacf704" path="/var/lib/kubelet/pods/46474651-de69-46e5-af7e-f29b0eacf704/volumes"
Nov 25 20:43:28 crc kubenswrapper[4983]: I1125 20:43:28.433663 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.441336 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-acb0-account-create-update-n7fkg"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.452987 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-643c-account-create-update-fdrdb"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.453664 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-74s6m" event={"ID":"272f4d2e-dc3a-4db5-a10e-891f8143f934","Type":"ContainerDied","Data":"e8b6686c92ffc3cdf142321119219f6bdf56409da4d907df99ee43cd08d080b5"}
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.453788 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8b6686c92ffc3cdf142321119219f6bdf56409da4d907df99ee43cd08d080b5"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.456819 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4cc5-account-create-update-q4b96" event={"ID":"06715857-e8a8-442e-8457-79b6e5506db4","Type":"ContainerDied","Data":"d7e7d40f9778f19ece27c89b361f08d3e27020edc9f540cf330aa4b9acb7d43b"}
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.456903 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7e7d40f9778f19ece27c89b361f08d3e27020edc9f540cf330aa4b9acb7d43b"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.462712 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-acb0-account-create-update-n7fkg" event={"ID":"5e138f00-2737-483b-ad2a-afd28c35e48b","Type":"ContainerDied","Data":"8e83a3924f44c25426c4c3d1eecb82a0ee9a91ee4007f0054fcdf2504dedb0f2"}
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.462783 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e83a3924f44c25426c4c3d1eecb82a0ee9a91ee4007f0054fcdf2504dedb0f2"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.463005 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-acb0-account-create-update-n7fkg"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.465813 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-chrnq" event={"ID":"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8","Type":"ContainerDied","Data":"7d2f55d2039f15cb29dbea1270ce970f7d2b124aff75caf62d934aa87fe00292"}
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.465877 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d2f55d2039f15cb29dbea1270ce970f7d2b124aff75caf62d934aa87fe00292"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.469718 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-643c-account-create-update-fdrdb" event={"ID":"e8b4648d-344a-4539-af4c-ddf7c8a23068","Type":"ContainerDied","Data":"374a8e3d920a271f65cfe0de90259bd819cf13d915a16a581be9d9cbfff5e289"}
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.469754 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="374a8e3d920a271f65cfe0de90259bd819cf13d915a16a581be9d9cbfff5e289"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.469805 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-643c-account-create-update-fdrdb"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.472087 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-chrnq"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.479662 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-74s6m"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.502229 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4cc5-account-create-update-q4b96"
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.549110 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06715857-e8a8-442e-8457-79b6e5506db4-operator-scripts\") pod \"06715857-e8a8-442e-8457-79b6e5506db4\" (UID: \"06715857-e8a8-442e-8457-79b6e5506db4\") "
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.549151 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfxvl\" (UniqueName: \"kubernetes.io/projected/5e138f00-2737-483b-ad2a-afd28c35e48b-kube-api-access-rfxvl\") pod \"5e138f00-2737-483b-ad2a-afd28c35e48b\" (UID: \"5e138f00-2737-483b-ad2a-afd28c35e48b\") "
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.549232 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e138f00-2737-483b-ad2a-afd28c35e48b-operator-scripts\") pod \"5e138f00-2737-483b-ad2a-afd28c35e48b\" (UID: \"5e138f00-2737-483b-ad2a-afd28c35e48b\") "
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.549259 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2s22\" (UniqueName: \"kubernetes.io/projected/272f4d2e-dc3a-4db5-a10e-891f8143f934-kube-api-access-z2s22\") pod \"272f4d2e-dc3a-4db5-a10e-891f8143f934\" (UID: \"272f4d2e-dc3a-4db5-a10e-891f8143f934\") "
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.549279 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8-operator-scripts\") pod \"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8\" (UID: \"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8\") "
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.549309 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b4648d-344a-4539-af4c-ddf7c8a23068-operator-scripts\") pod \"e8b4648d-344a-4539-af4c-ddf7c8a23068\" (UID: \"e8b4648d-344a-4539-af4c-ddf7c8a23068\") "
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.549412 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99dht\" (UniqueName: \"kubernetes.io/projected/06715857-e8a8-442e-8457-79b6e5506db4-kube-api-access-99dht\") pod \"06715857-e8a8-442e-8457-79b6e5506db4\" (UID: \"06715857-e8a8-442e-8457-79b6e5506db4\") "
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.549431 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v75fj\" (UniqueName: \"kubernetes.io/projected/e8b4648d-344a-4539-af4c-ddf7c8a23068-kube-api-access-v75fj\") pod \"e8b4648d-344a-4539-af4c-ddf7c8a23068\" (UID: \"e8b4648d-344a-4539-af4c-ddf7c8a23068\") "
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.549474 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/272f4d2e-dc3a-4db5-a10e-891f8143f934-operator-scripts\") pod \"272f4d2e-dc3a-4db5-a10e-891f8143f934\" (UID: \"272f4d2e-dc3a-4db5-a10e-891f8143f934\") "
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.549494 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb8c2\" (UniqueName: \"kubernetes.io/projected/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8-kube-api-access-rb8c2\") pod \"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8\" (UID: \"47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8\") "
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.550736 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8" (UID: "47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.551091 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e138f00-2737-483b-ad2a-afd28c35e48b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e138f00-2737-483b-ad2a-afd28c35e48b" (UID: "5e138f00-2737-483b-ad2a-afd28c35e48b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.551632 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272f4d2e-dc3a-4db5-a10e-891f8143f934-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "272f4d2e-dc3a-4db5-a10e-891f8143f934" (UID: "272f4d2e-dc3a-4db5-a10e-891f8143f934"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.551731 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8b4648d-344a-4539-af4c-ddf7c8a23068-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8b4648d-344a-4539-af4c-ddf7c8a23068" (UID: "e8b4648d-344a-4539-af4c-ddf7c8a23068"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.552149 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06715857-e8a8-442e-8457-79b6e5506db4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06715857-e8a8-442e-8457-79b6e5506db4" (UID: "06715857-e8a8-442e-8457-79b6e5506db4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.553087 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8b4648d-344a-4539-af4c-ddf7c8a23068-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.553115 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/272f4d2e-dc3a-4db5-a10e-891f8143f934-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.553128 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06715857-e8a8-442e-8457-79b6e5506db4-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.553142 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e138f00-2737-483b-ad2a-afd28c35e48b-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.553154 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.555579 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8-kube-api-access-rb8c2" (OuterVolumeSpecName: "kube-api-access-rb8c2") pod "47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8" (UID: "47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8"). InnerVolumeSpecName "kube-api-access-rb8c2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.555628 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e138f00-2737-483b-ad2a-afd28c35e48b-kube-api-access-rfxvl" (OuterVolumeSpecName: "kube-api-access-rfxvl") pod "5e138f00-2737-483b-ad2a-afd28c35e48b" (UID: "5e138f00-2737-483b-ad2a-afd28c35e48b"). InnerVolumeSpecName "kube-api-access-rfxvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.556194 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06715857-e8a8-442e-8457-79b6e5506db4-kube-api-access-99dht" (OuterVolumeSpecName: "kube-api-access-99dht") pod "06715857-e8a8-442e-8457-79b6e5506db4" (UID: "06715857-e8a8-442e-8457-79b6e5506db4"). InnerVolumeSpecName "kube-api-access-99dht". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.557413 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272f4d2e-dc3a-4db5-a10e-891f8143f934-kube-api-access-z2s22" (OuterVolumeSpecName: "kube-api-access-z2s22") pod "272f4d2e-dc3a-4db5-a10e-891f8143f934" (UID: "272f4d2e-dc3a-4db5-a10e-891f8143f934"). InnerVolumeSpecName "kube-api-access-z2s22". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.557742 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b4648d-344a-4539-af4c-ddf7c8a23068-kube-api-access-v75fj" (OuterVolumeSpecName: "kube-api-access-v75fj") pod "e8b4648d-344a-4539-af4c-ddf7c8a23068" (UID: "e8b4648d-344a-4539-af4c-ddf7c8a23068"). InnerVolumeSpecName "kube-api-access-v75fj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.654083 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99dht\" (UniqueName: \"kubernetes.io/projected/06715857-e8a8-442e-8457-79b6e5506db4-kube-api-access-99dht\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.654129 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v75fj\" (UniqueName: \"kubernetes.io/projected/e8b4648d-344a-4539-af4c-ddf7c8a23068-kube-api-access-v75fj\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.654139 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb8c2\" (UniqueName: \"kubernetes.io/projected/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8-kube-api-access-rb8c2\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.654148 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfxvl\" (UniqueName: \"kubernetes.io/projected/5e138f00-2737-483b-ad2a-afd28c35e48b-kube-api-access-rfxvl\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:30 crc kubenswrapper[4983]: I1125 20:43:30.654157 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2s22\" (UniqueName: \"kubernetes.io/projected/272f4d2e-dc3a-4db5-a10e-891f8143f934-kube-api-access-z2s22\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:31 crc kubenswrapper[4983]: I1125 20:43:31.478103 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-74s6m"
Nov 25 20:43:31 crc kubenswrapper[4983]: I1125 20:43:31.481289 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wbb46" event={"ID":"82817911-bcba-4071-a24b-c6dbb6f1973d","Type":"ContainerStarted","Data":"303d8a8124cd096bd2b4359a1a71c40b7fe28f3e40e29d21c2df4a2223d429d2"}
Nov 25 20:43:31 crc kubenswrapper[4983]: I1125 20:43:31.482772 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4cc5-account-create-update-q4b96"
Nov 25 20:43:31 crc kubenswrapper[4983]: I1125 20:43:31.488987 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-chrnq"
Nov 25 20:43:31 crc kubenswrapper[4983]: I1125 20:43:31.528090 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wbb46" podStartSLOduration=2.9293766359999998 podStartE2EDuration="8.528065737s" podCreationTimestamp="2025-11-25 20:43:23 +0000 UTC" firstStartedPulling="2025-11-25 20:43:24.698619939 +0000 UTC m=+985.811153331" lastFinishedPulling="2025-11-25 20:43:30.297309 +0000 UTC m=+991.409842432" observedRunningTime="2025-11-25 20:43:31.507684397 +0000 UTC m=+992.620217789" watchObservedRunningTime="2025-11-25 20:43:31.528065737 +0000 UTC m=+992.640599129"
Nov 25 20:43:33 crc kubenswrapper[4983]: I1125 20:43:33.497705 4983 generic.go:334] "Generic (PLEG): container finished" podID="82817911-bcba-4071-a24b-c6dbb6f1973d" containerID="303d8a8124cd096bd2b4359a1a71c40b7fe28f3e40e29d21c2df4a2223d429d2" exitCode=0
Nov 25 20:43:33 crc kubenswrapper[4983]: I1125 20:43:33.497815 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wbb46" event={"ID":"82817911-bcba-4071-a24b-c6dbb6f1973d","Type":"ContainerDied","Data":"303d8a8124cd096bd2b4359a1a71c40b7fe28f3e40e29d21c2df4a2223d429d2"}
Nov 25 20:43:34 crc kubenswrapper[4983]: I1125 20:43:34.885093 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wbb46"
Nov 25 20:43:34 crc kubenswrapper[4983]: I1125 20:43:34.938764 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trnxh\" (UniqueName: \"kubernetes.io/projected/82817911-bcba-4071-a24b-c6dbb6f1973d-kube-api-access-trnxh\") pod \"82817911-bcba-4071-a24b-c6dbb6f1973d\" (UID: \"82817911-bcba-4071-a24b-c6dbb6f1973d\") "
Nov 25 20:43:34 crc kubenswrapper[4983]: I1125 20:43:34.938866 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82817911-bcba-4071-a24b-c6dbb6f1973d-combined-ca-bundle\") pod \"82817911-bcba-4071-a24b-c6dbb6f1973d\" (UID: \"82817911-bcba-4071-a24b-c6dbb6f1973d\") "
Nov 25 20:43:34 crc kubenswrapper[4983]: I1125 20:43:34.938917 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82817911-bcba-4071-a24b-c6dbb6f1973d-config-data\") pod \"82817911-bcba-4071-a24b-c6dbb6f1973d\" (UID: \"82817911-bcba-4071-a24b-c6dbb6f1973d\") "
Nov 25 20:43:34 crc kubenswrapper[4983]: I1125 20:43:34.968367 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82817911-bcba-4071-a24b-c6dbb6f1973d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82817911-bcba-4071-a24b-c6dbb6f1973d" (UID: "82817911-bcba-4071-a24b-c6dbb6f1973d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:43:34 crc kubenswrapper[4983]: I1125 20:43:34.976176 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82817911-bcba-4071-a24b-c6dbb6f1973d-kube-api-access-trnxh" (OuterVolumeSpecName: "kube-api-access-trnxh") pod "82817911-bcba-4071-a24b-c6dbb6f1973d" (UID: "82817911-bcba-4071-a24b-c6dbb6f1973d"). InnerVolumeSpecName "kube-api-access-trnxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:43:34 crc kubenswrapper[4983]: I1125 20:43:34.986407 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82817911-bcba-4071-a24b-c6dbb6f1973d-config-data" (OuterVolumeSpecName: "config-data") pod "82817911-bcba-4071-a24b-c6dbb6f1973d" (UID: "82817911-bcba-4071-a24b-c6dbb6f1973d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.041161 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trnxh\" (UniqueName: \"kubernetes.io/projected/82817911-bcba-4071-a24b-c6dbb6f1973d-kube-api-access-trnxh\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.041190 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82817911-bcba-4071-a24b-c6dbb6f1973d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.041200 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82817911-bcba-4071-a24b-c6dbb6f1973d-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.253740 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.328058 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-25mgn"]
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.328368 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-25mgn" podUID="c26851c2-2ee8-457b-926b-2ccf02fb308e" containerName="dnsmasq-dns" containerID="cri-o://3e56b34fe5d51c5ce585d74b4f668b8b6fa7940df3cee881641fadfbd775ce18" gracePeriod=10
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.624437 4983 generic.go:334] "Generic (PLEG): container finished" podID="c26851c2-2ee8-457b-926b-2ccf02fb308e" containerID="3e56b34fe5d51c5ce585d74b4f668b8b6fa7940df3cee881641fadfbd775ce18" exitCode=0
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.661872 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-25mgn" event={"ID":"c26851c2-2ee8-457b-926b-2ccf02fb308e","Type":"ContainerDied","Data":"3e56b34fe5d51c5ce585d74b4f668b8b6fa7940df3cee881641fadfbd775ce18"}
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.663882 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wbb46" event={"ID":"82817911-bcba-4071-a24b-c6dbb6f1973d","Type":"ContainerDied","Data":"9fc1aa18264c54c8f6008377ba9f62f2f150f4171d89c651d8fde0d03643bb2a"}
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.663940 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fc1aa18264c54c8f6008377ba9f62f2f150f4171d89c651d8fde0d03643bb2a"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.664038 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wbb46"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.902697 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2tgml"]
Nov 25 20:43:35 crc kubenswrapper[4983]: E1125 20:43:35.903142 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e111c438-3824-4fac-9db8-ce47d6974e6d" containerName="mariadb-database-create"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.903156 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="e111c438-3824-4fac-9db8-ce47d6974e6d" containerName="mariadb-database-create"
Nov 25 20:43:35 crc kubenswrapper[4983]: E1125 20:43:35.903168 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b4648d-344a-4539-af4c-ddf7c8a23068" containerName="mariadb-account-create-update"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.903174 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b4648d-344a-4539-af4c-ddf7c8a23068" containerName="mariadb-account-create-update"
Nov 25 20:43:35 crc kubenswrapper[4983]: E1125 20:43:35.903198 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46474651-de69-46e5-af7e-f29b0eacf704" containerName="init"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.903205 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="46474651-de69-46e5-af7e-f29b0eacf704" containerName="init"
Nov 25 20:43:35 crc kubenswrapper[4983]: E1125 20:43:35.903218 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272f4d2e-dc3a-4db5-a10e-891f8143f934" containerName="mariadb-database-create"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.903224 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="272f4d2e-dc3a-4db5-a10e-891f8143f934" containerName="mariadb-database-create"
Nov 25 20:43:35 crc kubenswrapper[4983]: E1125 20:43:35.903232 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46474651-de69-46e5-af7e-f29b0eacf704" containerName="dnsmasq-dns"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.903238 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="46474651-de69-46e5-af7e-f29b0eacf704" containerName="dnsmasq-dns"
Nov 25 20:43:35 crc kubenswrapper[4983]: E1125 20:43:35.903248 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e138f00-2737-483b-ad2a-afd28c35e48b" containerName="mariadb-account-create-update"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.903254 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e138f00-2737-483b-ad2a-afd28c35e48b" containerName="mariadb-account-create-update"
Nov 25 20:43:35 crc kubenswrapper[4983]: E1125 20:43:35.903268 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82817911-bcba-4071-a24b-c6dbb6f1973d" containerName="keystone-db-sync"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.903274 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="82817911-bcba-4071-a24b-c6dbb6f1973d" containerName="keystone-db-sync"
Nov 25 20:43:35 crc kubenswrapper[4983]: E1125 20:43:35.903284 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8" containerName="mariadb-database-create"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.903290 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8" containerName="mariadb-database-create"
Nov 25 20:43:35 crc kubenswrapper[4983]: E1125 20:43:35.903296 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06715857-e8a8-442e-8457-79b6e5506db4" containerName="mariadb-account-create-update"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.903303 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="06715857-e8a8-442e-8457-79b6e5506db4" containerName="mariadb-account-create-update"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.903473 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b4648d-344a-4539-af4c-ddf7c8a23068" containerName="mariadb-account-create-update"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.906125 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="46474651-de69-46e5-af7e-f29b0eacf704" containerName="dnsmasq-dns"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.906142 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e138f00-2737-483b-ad2a-afd28c35e48b" containerName="mariadb-account-create-update"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.906158 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="272f4d2e-dc3a-4db5-a10e-891f8143f934" containerName="mariadb-database-create"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.906178 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8" containerName="mariadb-database-create"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.906194 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="e111c438-3824-4fac-9db8-ce47d6974e6d" containerName="mariadb-database-create"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.906208 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="82817911-bcba-4071-a24b-c6dbb6f1973d" containerName="keystone-db-sync"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.906216 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="06715857-e8a8-442e-8457-79b6e5506db4" containerName="mariadb-account-create-update"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.907819 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-2tgml"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.915641 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-p6dsc"]
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.916847 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p6dsc"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.930112 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.930224 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.931990 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q4pfh"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.932225 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.932349 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.942535 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2tgml"]
Nov 25 20:43:35 crc kubenswrapper[4983]: I1125 20:43:35.974162 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p6dsc"]
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.077158 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zgwv\" (UniqueName: \"kubernetes.io/projected/ed34075b-ee17-40bd-a65e-e531201fb127-kube-api-access-7zgwv\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.084125 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-credential-keys\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.084208 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-dns-svc\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.084281 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-fernet-keys\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.084310 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kskm8\" (UniqueName: \"kubernetes.io/projected/53cf8b10-7201-49c1-8f2b-ce63f211b469-kube-api-access-kskm8\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.084340 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-scripts\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.084390 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-config-data\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.084502 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-combined-ca-bundle\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.084541 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.084615 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.084674 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-config\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.084719 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.101170 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-25mgn"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.106099 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-768fff7bd9-zjc9n"]
Nov 25 20:43:36 crc kubenswrapper[4983]: E1125 20:43:36.106529 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26851c2-2ee8-457b-926b-2ccf02fb308e" containerName="dnsmasq-dns"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.107812 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26851c2-2ee8-457b-926b-2ccf02fb308e" containerName="dnsmasq-dns"
Nov 25 20:43:36 crc kubenswrapper[4983]: E1125 20:43:36.107864 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c26851c2-2ee8-457b-926b-2ccf02fb308e" containerName="init"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.107871 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="c26851c2-2ee8-457b-926b-2ccf02fb308e" containerName="init"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.108069 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="c26851c2-2ee8-457b-926b-2ccf02fb308e" containerName="dnsmasq-dns"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.113385 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-768fff7bd9-zjc9n"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.118454 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.118691 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.118945 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-6jtlq"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.119141 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.155087 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-768fff7bd9-zjc9n"]
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.187058 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-combined-ca-bundle\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.187407 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml"
Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.187518 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID:
\"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.187640 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-config\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.188730 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.188874 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zgwv\" (UniqueName: \"kubernetes.io/projected/ed34075b-ee17-40bd-a65e-e531201fb127-kube-api-access-7zgwv\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.190365 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.190997 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-credential-keys\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " 
pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.191141 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-dns-svc\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.191340 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-fernet-keys\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.191466 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kskm8\" (UniqueName: \"kubernetes.io/projected/53cf8b10-7201-49c1-8f2b-ce63f211b469-kube-api-access-kskm8\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.191597 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-scripts\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.191695 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.191735 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-config-data\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.193391 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-config\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.194145 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.213046 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.213881 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-fernet-keys\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.216200 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-scripts\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.216271 4983 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.216860 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-config-data\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.220510 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.224594 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-credential-keys\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.225011 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-dns-svc\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.232277 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.244122 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.260763 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-combined-ca-bundle\") pod \"keystone-bootstrap-p6dsc\" (UID: 
\"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.264171 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zgwv\" (UniqueName: \"kubernetes.io/projected/ed34075b-ee17-40bd-a65e-e531201fb127-kube-api-access-7zgwv\") pod \"dnsmasq-dns-847c4cc679-2tgml\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.266022 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kskm8\" (UniqueName: \"kubernetes.io/projected/53cf8b10-7201-49c1-8f2b-ce63f211b469-kube-api-access-kskm8\") pod \"keystone-bootstrap-p6dsc\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.296025 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-ovsdbserver-sb\") pod \"c26851c2-2ee8-457b-926b-2ccf02fb308e\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.296161 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-ovsdbserver-nb\") pod \"c26851c2-2ee8-457b-926b-2ccf02fb308e\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.296230 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-dns-svc\") pod \"c26851c2-2ee8-457b-926b-2ccf02fb308e\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.296272 4983 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htt75\" (UniqueName: \"kubernetes.io/projected/c26851c2-2ee8-457b-926b-2ccf02fb308e-kube-api-access-htt75\") pod \"c26851c2-2ee8-457b-926b-2ccf02fb308e\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.296344 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-config\") pod \"c26851c2-2ee8-457b-926b-2ccf02fb308e\" (UID: \"c26851c2-2ee8-457b-926b-2ccf02fb308e\") " Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.296589 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ed84598-dc8c-4060-b762-1c8240ed61fa-config-data\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.296629 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ed84598-dc8c-4060-b762-1c8240ed61fa-horizon-secret-key\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.296650 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx274\" (UniqueName: \"kubernetes.io/projected/6ed84598-dc8c-4060-b762-1c8240ed61fa-kube-api-access-vx274\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.296736 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ed84598-dc8c-4060-b762-1c8240ed61fa-logs\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.296759 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ed84598-dc8c-4060-b762-1c8240ed61fa-scripts\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.302094 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c26851c2-2ee8-457b-926b-2ccf02fb308e-kube-api-access-htt75" (OuterVolumeSpecName: "kube-api-access-htt75") pod "c26851c2-2ee8-457b-926b-2ccf02fb308e" (UID: "c26851c2-2ee8-457b-926b-2ccf02fb308e"). InnerVolumeSpecName "kube-api-access-htt75". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.332796 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h4hwg"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.334007 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.337947 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-t8tlz" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.341747 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.349739 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.349914 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h4hwg"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.380820 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7dskv"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.382008 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.394286 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.395262 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.395538 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.395723 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pmbck" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.397790 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q765t\" (UniqueName: \"kubernetes.io/projected/4fa168af-421e-4a45-8201-13eb69a20830-kube-api-access-q765t\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.397830 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6ed84598-dc8c-4060-b762-1c8240ed61fa-config-data\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.397865 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ed84598-dc8c-4060-b762-1c8240ed61fa-horizon-secret-key\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.397883 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx274\" (UniqueName: \"kubernetes.io/projected/6ed84598-dc8c-4060-b762-1c8240ed61fa-kube-api-access-vx274\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.397901 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-scripts\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.397958 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.397980 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa168af-421e-4a45-8201-13eb69a20830-run-httpd\") pod 
\"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.398033 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa168af-421e-4a45-8201-13eb69a20830-log-httpd\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.398078 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.398111 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ed84598-dc8c-4060-b762-1c8240ed61fa-logs\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.398284 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ed84598-dc8c-4060-b762-1c8240ed61fa-scripts\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.398313 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-config-data\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.398354 
4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htt75\" (UniqueName: \"kubernetes.io/projected/c26851c2-2ee8-457b-926b-2ccf02fb308e-kube-api-access-htt75\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.398713 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ed84598-dc8c-4060-b762-1c8240ed61fa-logs\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.402837 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ed84598-dc8c-4060-b762-1c8240ed61fa-scripts\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.403981 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ed84598-dc8c-4060-b762-1c8240ed61fa-config-data\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.438133 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx274\" (UniqueName: \"kubernetes.io/projected/6ed84598-dc8c-4060-b762-1c8240ed61fa-kube-api-access-vx274\") pod \"horizon-768fff7bd9-zjc9n\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.438337 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ed84598-dc8c-4060-b762-1c8240ed61fa-horizon-secret-key\") pod \"horizon-768fff7bd9-zjc9n\" (UID: 
\"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.461978 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-649c7d5f69-gqrc7"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.467772 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.468978 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.492621 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-sfn6q"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.493839 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500293 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj4wg\" (UniqueName: \"kubernetes.io/projected/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-kube-api-access-rj4wg\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500362 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24119f4e-9bb9-4f12-a031-03ec811465d1-db-sync-config-data\") pod \"barbican-db-sync-h4hwg\" (UID: \"24119f4e-9bb9-4f12-a031-03ec811465d1\") " pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500405 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-etc-machine-id\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500429 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-config-data\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500486 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q765t\" (UniqueName: \"kubernetes.io/projected/4fa168af-421e-4a45-8201-13eb69a20830-kube-api-access-q765t\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500517 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-db-sync-config-data\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500534 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh8fn\" (UniqueName: \"kubernetes.io/projected/24119f4e-9bb9-4f12-a031-03ec811465d1-kube-api-access-rh8fn\") pod \"barbican-db-sync-h4hwg\" (UID: \"24119f4e-9bb9-4f12-a031-03ec811465d1\") " pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500585 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-scripts\") pod \"ceilometer-0\" (UID: 
\"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500622 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500661 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-config-data\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500677 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa168af-421e-4a45-8201-13eb69a20830-run-httpd\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500705 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa168af-421e-4a45-8201-13eb69a20830-log-httpd\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500741 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-combined-ca-bundle\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500765 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500783 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-scripts\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.500824 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24119f4e-9bb9-4f12-a031-03ec811465d1-combined-ca-bundle\") pod \"barbican-db-sync-h4hwg\" (UID: \"24119f4e-9bb9-4f12-a031-03ec811465d1\") " pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.503884 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.504639 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa168af-421e-4a45-8201-13eb69a20830-log-httpd\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.506597 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa168af-421e-4a45-8201-13eb69a20830-run-httpd\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.508186 4983 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.508381 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qs9rd" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.523104 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-649c7d5f69-gqrc7"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.524331 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.537241 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-scripts\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.554058 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.554612 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q765t\" (UniqueName: \"kubernetes.io/projected/4fa168af-421e-4a45-8201-13eb69a20830-kube-api-access-q765t\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.556086 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7dskv"] Nov 25 20:43:36 crc 
kubenswrapper[4983]: I1125 20:43:36.564824 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-config-data\") pod \"ceilometer-0\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.575995 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c26851c2-2ee8-457b-926b-2ccf02fb308e" (UID: "c26851c2-2ee8-457b-926b-2ccf02fb308e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.576172 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c26851c2-2ee8-457b-926b-2ccf02fb308e" (UID: "c26851c2-2ee8-457b-926b-2ccf02fb308e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.602893 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78pr\" (UniqueName: \"kubernetes.io/projected/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-kube-api-access-h78pr\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.602934 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-combined-ca-bundle\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.602969 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-scripts\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.602984 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24119f4e-9bb9-4f12-a031-03ec811465d1-combined-ca-bundle\") pod \"barbican-db-sync-h4hwg\" (UID: \"24119f4e-9bb9-4f12-a031-03ec811465d1\") " pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603014 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj4wg\" (UniqueName: \"kubernetes.io/projected/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-kube-api-access-rj4wg\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc 
kubenswrapper[4983]: I1125 20:43:36.603051 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24119f4e-9bb9-4f12-a031-03ec811465d1-db-sync-config-data\") pod \"barbican-db-sync-h4hwg\" (UID: \"24119f4e-9bb9-4f12-a031-03ec811465d1\") " pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603070 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-config-data\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603231 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-etc-machine-id\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603275 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18b7eb10-6c61-469a-87c6-d263f94dce5d-config\") pod \"neutron-db-sync-sfn6q\" (UID: \"18b7eb10-6c61-469a-87c6-d263f94dce5d\") " pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603301 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-scripts\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603323 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffnvf\" (UniqueName: \"kubernetes.io/projected/18b7eb10-6c61-469a-87c6-d263f94dce5d-kube-api-access-ffnvf\") pod \"neutron-db-sync-sfn6q\" (UID: \"18b7eb10-6c61-469a-87c6-d263f94dce5d\") " pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603355 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-logs\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603379 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b7eb10-6c61-469a-87c6-d263f94dce5d-combined-ca-bundle\") pod \"neutron-db-sync-sfn6q\" (UID: \"18b7eb10-6c61-469a-87c6-d263f94dce5d\") " pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603406 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-db-sync-config-data\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603423 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh8fn\" (UniqueName: \"kubernetes.io/projected/24119f4e-9bb9-4f12-a031-03ec811465d1-kube-api-access-rh8fn\") pod \"barbican-db-sync-h4hwg\" (UID: \"24119f4e-9bb9-4f12-a031-03ec811465d1\") " pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603475 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-horizon-secret-key\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603516 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-config-data\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603617 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.603632 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.607773 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-config-data\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.608445 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-etc-machine-id\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.611702 4983 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-db-sync-config-data\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.625054 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-scripts\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.626214 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24119f4e-9bb9-4f12-a031-03ec811465d1-db-sync-config-data\") pod \"barbican-db-sync-h4hwg\" (UID: \"24119f4e-9bb9-4f12-a031-03ec811465d1\") " pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.655148 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-combined-ca-bundle\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.668338 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c26851c2-2ee8-457b-926b-2ccf02fb308e" (UID: "c26851c2-2ee8-457b-926b-2ccf02fb308e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.668692 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.668929 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-config" (OuterVolumeSpecName: "config") pod "c26851c2-2ee8-457b-926b-2ccf02fb308e" (UID: "c26851c2-2ee8-457b-926b-2ccf02fb308e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.669686 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24119f4e-9bb9-4f12-a031-03ec811465d1-combined-ca-bundle\") pod \"barbican-db-sync-h4hwg\" (UID: \"24119f4e-9bb9-4f12-a031-03ec811465d1\") " pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.671845 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh8fn\" (UniqueName: \"kubernetes.io/projected/24119f4e-9bb9-4f12-a031-03ec811465d1-kube-api-access-rh8fn\") pod \"barbican-db-sync-h4hwg\" (UID: \"24119f4e-9bb9-4f12-a031-03ec811465d1\") " pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.681335 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sfn6q"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.693990 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj4wg\" (UniqueName: \"kubernetes.io/projected/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-kube-api-access-rj4wg\") pod \"cinder-db-sync-7dskv\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.696277 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2tgml"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.707291 4983 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-25mgn" event={"ID":"c26851c2-2ee8-457b-926b-2ccf02fb308e","Type":"ContainerDied","Data":"50e9d7a2f6048553bd55c4c3bfd9e9363709b8fa286235f0e4a2cd88671ea109"} Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.707379 4983 scope.go:117] "RemoveContainer" containerID="3e56b34fe5d51c5ce585d74b4f668b8b6fa7940df3cee881641fadfbd775ce18" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.707873 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-25mgn" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.711854 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-config-data\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.711898 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18b7eb10-6c61-469a-87c6-d263f94dce5d-config\") pod \"neutron-db-sync-sfn6q\" (UID: \"18b7eb10-6c61-469a-87c6-d263f94dce5d\") " pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.711926 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-scripts\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.711962 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffnvf\" (UniqueName: \"kubernetes.io/projected/18b7eb10-6c61-469a-87c6-d263f94dce5d-kube-api-access-ffnvf\") pod 
\"neutron-db-sync-sfn6q\" (UID: \"18b7eb10-6c61-469a-87c6-d263f94dce5d\") " pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.711990 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-logs\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.712013 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b7eb10-6c61-469a-87c6-d263f94dce5d-combined-ca-bundle\") pod \"neutron-db-sync-sfn6q\" (UID: \"18b7eb10-6c61-469a-87c6-d263f94dce5d\") " pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.712056 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-horizon-secret-key\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.712110 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h78pr\" (UniqueName: \"kubernetes.io/projected/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-kube-api-access-h78pr\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.712204 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.712216 4983 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c26851c2-2ee8-457b-926b-2ccf02fb308e-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.716025 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-scripts\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.716444 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-logs\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.732279 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b7eb10-6c61-469a-87c6-d263f94dce5d-combined-ca-bundle\") pod \"neutron-db-sync-sfn6q\" (UID: \"18b7eb10-6c61-469a-87c6-d263f94dce5d\") " pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.737826 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffnvf\" (UniqueName: \"kubernetes.io/projected/18b7eb10-6c61-469a-87c6-d263f94dce5d-kube-api-access-ffnvf\") pod \"neutron-db-sync-sfn6q\" (UID: \"18b7eb10-6c61-469a-87c6-d263f94dce5d\") " pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.746611 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h78pr\" (UniqueName: \"kubernetes.io/projected/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-kube-api-access-h78pr\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " 
pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.747420 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.761928 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/18b7eb10-6c61-469a-87c6-d263f94dce5d-config\") pod \"neutron-db-sync-sfn6q\" (UID: \"18b7eb10-6c61-469a-87c6-d263f94dce5d\") " pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.764411 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-config-data\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.772113 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-horizon-secret-key\") pod \"horizon-649c7d5f69-gqrc7\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.777143 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7dskv" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.826619 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-vq68b"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.830992 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.839862 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kgctb"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.841610 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.844740 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-29652" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.844805 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.845141 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.847008 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vq68b"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.859314 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.864680 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kgctb"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.887640 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.889361 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.894116 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.894533 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dkrf2" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.894810 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.895064 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.906676 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.936654 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-config-data\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.936703 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6eb357f-e3ba-4631-951c-65760c2c707d-logs\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.936742 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jj2\" (UniqueName: \"kubernetes.io/projected/a6eb357f-e3ba-4631-951c-65760c2c707d-kube-api-access-98jj2\") pod 
\"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.936771 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.936792 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.936816 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-config\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.936831 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.936853 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsj5r\" (UniqueName: 
\"kubernetes.io/projected/81a52229-9987-46fe-b40b-a1951d6d0396-kube-api-access-dsj5r\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.936871 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-scripts\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.936901 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-combined-ca-bundle\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.936955 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:36 crc kubenswrapper[4983]: I1125 20:43:36.951625 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.005759 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-25mgn"] Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.032362 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-25mgn"] Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.040969 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041051 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041081 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-config-data\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041102 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6eb357f-e3ba-4631-951c-65760c2c707d-logs\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041143 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98jj2\" (UniqueName: \"kubernetes.io/projected/a6eb357f-e3ba-4631-951c-65760c2c707d-kube-api-access-98jj2\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041162 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041179 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-scripts\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041214 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041236 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041269 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-config\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041285 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041308 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxsj8\" (UniqueName: \"kubernetes.io/projected/74acde52-084b-4768-9a68-ffb38827f3db-kube-api-access-sxsj8\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041323 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041349 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsj5r\" (UniqueName: \"kubernetes.io/projected/81a52229-9987-46fe-b40b-a1951d6d0396-kube-api-access-dsj5r\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041475 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-scripts\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041501 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/74acde52-084b-4768-9a68-ffb38827f3db-logs\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041521 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74acde52-084b-4768-9a68-ffb38827f3db-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041544 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-combined-ca-bundle\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.041571 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-config-data\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.043353 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.048264 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-config\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.051333 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6eb357f-e3ba-4631-951c-65760c2c707d-logs\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.052434 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.052771 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.059097 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-combined-ca-bundle\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.060676 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-config-data\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") 
" pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.062669 4983 scope.go:117] "RemoveContainer" containerID="8d2c64750a3750b42b134ba232389fdb04d7126680a8ba586664c484570e9553" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.066670 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.067460 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsj5r\" (UniqueName: \"kubernetes.io/projected/81a52229-9987-46fe-b40b-a1951d6d0396-kube-api-access-dsj5r\") pod \"dnsmasq-dns-785d8bcb8c-kgctb\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.068283 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.077109 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-scripts\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.094281 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98jj2\" (UniqueName: \"kubernetes.io/projected/a6eb357f-e3ba-4631-951c-65760c2c707d-kube-api-access-98jj2\") pod \"placement-db-sync-vq68b\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.148209 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74acde52-084b-4768-9a68-ffb38827f3db-logs\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.148291 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74acde52-084b-4768-9a68-ffb38827f3db-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.148360 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-config-data\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 
20:43:37.148474 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.148724 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-scripts\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.148751 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.148824 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74acde52-084b-4768-9a68-ffb38827f3db-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.149222 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74acde52-084b-4768-9a68-ffb38827f3db-logs\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.149874 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxsj8\" 
(UniqueName: \"kubernetes.io/projected/74acde52-084b-4768-9a68-ffb38827f3db-kube-api-access-sxsj8\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.149900 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.150465 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.157104 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-config-data\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.157639 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-scripts\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.167261 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.178056 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.192738 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxsj8\" (UniqueName: \"kubernetes.io/projected/74acde52-084b-4768-9a68-ffb38827f3db-kube-api-access-sxsj8\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.192942 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.222364 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2tgml"] Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.239637 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.244864 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.247806 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.247851 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.248737 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.353385 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.371606 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22307b23-1606-4b31-8f0d-d24b999df93d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.374114 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-vq68b" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.374770 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.374867 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.374903 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.374920 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22307b23-1606-4b31-8f0d-d24b999df93d-logs\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.374959 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.375003 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkcfh\" (UniqueName: \"kubernetes.io/projected/22307b23-1606-4b31-8f0d-d24b999df93d-kube-api-access-kkcfh\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.425165 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.477141 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22307b23-1606-4b31-8f0d-d24b999df93d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.477206 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.477280 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.477306 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.477321 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22307b23-1606-4b31-8f0d-d24b999df93d-logs\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.477372 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.477642 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkcfh\" (UniqueName: \"kubernetes.io/projected/22307b23-1606-4b31-8f0d-d24b999df93d-kube-api-access-kkcfh\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.477711 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.477734 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/22307b23-1606-4b31-8f0d-d24b999df93d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.478082 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.479312 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22307b23-1606-4b31-8f0d-d24b999df93d-logs\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.485133 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.486338 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.496309 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.504137 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.513934 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkcfh\" (UniqueName: \"kubernetes.io/projected/22307b23-1606-4b31-8f0d-d24b999df93d-kube-api-access-kkcfh\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.518431 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.604145 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7dskv"] Nov 25 20:43:37 crc kubenswrapper[4983]: W1125 20:43:37.623917 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcca9d2b3_2f79_4d38_8427_f5bfae9fc314.slice/crio-333c8e95b17d65bf2c2d06f0cb2b3fb00ce33ec9d929fb96311c0c614d7ef4cf WatchSource:0}: Error finding container 333c8e95b17d65bf2c2d06f0cb2b3fb00ce33ec9d929fb96311c0c614d7ef4cf: Status 404 returned error can't find the container with id 333c8e95b17d65bf2c2d06f0cb2b3fb00ce33ec9d929fb96311c0c614d7ef4cf Nov 25 20:43:37 crc kubenswrapper[4983]: 
I1125 20:43:37.628579 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c26851c2-2ee8-457b-926b-2ccf02fb308e" path="/var/lib/kubelet/pods/c26851c2-2ee8-457b-926b-2ccf02fb308e/volumes" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.658032 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.673874 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-768fff7bd9-zjc9n"] Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.698383 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p6dsc"] Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.765636 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-768fff7bd9-zjc9n" event={"ID":"6ed84598-dc8c-4060-b762-1c8240ed61fa","Type":"ContainerStarted","Data":"96330f7326c19c2d52e0d608e79567d073e40c1c9ceba871b8e6b48a27f853bd"} Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.796485 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-2tgml" event={"ID":"ed34075b-ee17-40bd-a65e-e531201fb127","Type":"ContainerStarted","Data":"14742830ec14647b9eae2e7e6bbee2283ab71edba1153b6ddb5147e85f08a6e0"} Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.833338 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7dskv" event={"ID":"cca9d2b3-2f79-4d38-8427-f5bfae9fc314","Type":"ContainerStarted","Data":"333c8e95b17d65bf2c2d06f0cb2b3fb00ce33ec9d929fb96311c0c614d7ef4cf"} Nov 25 20:43:37 crc kubenswrapper[4983]: I1125 20:43:37.993068 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-649c7d5f69-gqrc7"] Nov 25 20:43:38 crc kubenswrapper[4983]: I1125 20:43:38.068388 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:43:38 crc kubenswrapper[4983]: 
I1125 20:43:38.081584 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h4hwg"] Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.103096 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-sfn6q"] Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.141153 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kgctb"] Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.251685 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vq68b"] Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.555120 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.608972 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.675164 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-649c7d5f69-gqrc7"] Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.697222 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.726158 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-794b99d65-vhntr"] Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.727671 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.743263 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-794b99d65-vhntr"] Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.825617 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-config-data\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.825736 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-scripts\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.825786 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-horizon-secret-key\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.825820 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tphnx\" (UniqueName: \"kubernetes.io/projected/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-kube-api-access-tphnx\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.825844 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-logs\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.844211 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p6dsc" event={"ID":"53cf8b10-7201-49c1-8f2b-ce63f211b469","Type":"ContainerStarted","Data":"f1a2531d5a01d305701e7c10eb82ed23528e2ae08af238c1734d6d8262a588b2"} Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.927698 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-horizon-secret-key\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.928431 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tphnx\" (UniqueName: \"kubernetes.io/projected/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-kube-api-access-tphnx\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.928456 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-logs\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.928535 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-config-data\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " 
pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.928616 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-scripts\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.928966 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-logs\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.929400 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-scripts\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.930079 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-config-data\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.945261 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-horizon-secret-key\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:38.948049 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tphnx\" (UniqueName: \"kubernetes.io/projected/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-kube-api-access-tphnx\") pod \"horizon-794b99d65-vhntr\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:39.054008 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:39.927286 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:39.927706 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:43:42 crc kubenswrapper[4983]: W1125 20:43:40.910668 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd9b0d36_cf2a_4218_88ef_fd0d74bb603e.slice/crio-fb6e86290dd6ecf6ad5a0b8e77e9be327f6a5630500a79c827b059e943b57016 WatchSource:0}: Error finding container fb6e86290dd6ecf6ad5a0b8e77e9be327f6a5630500a79c827b059e943b57016: Status 404 returned error can't find the container with id fb6e86290dd6ecf6ad5a0b8e77e9be327f6a5630500a79c827b059e943b57016 Nov 25 20:43:42 crc kubenswrapper[4983]: W1125 20:43:40.913227 4983 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fa168af_421e_4a45_8201_13eb69a20830.slice/crio-557d7f63b96380c4263a5622c23548333b34c54a0f0182e27ee1806c02e54286 WatchSource:0}: Error finding container 557d7f63b96380c4263a5622c23548333b34c54a0f0182e27ee1806c02e54286: Status 404 returned error can't find the container with id 557d7f63b96380c4263a5622c23548333b34c54a0f0182e27ee1806c02e54286 Nov 25 20:43:42 crc kubenswrapper[4983]: W1125 20:43:40.921092 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18b7eb10_6c61_469a_87c6_d263f94dce5d.slice/crio-16d64e8f48157ebf17f9764cb9c391833e46ecacb05d151bd202762174947c38 WatchSource:0}: Error finding container 16d64e8f48157ebf17f9764cb9c391833e46ecacb05d151bd202762174947c38: Status 404 returned error can't find the container with id 16d64e8f48157ebf17f9764cb9c391833e46ecacb05d151bd202762174947c38 Nov 25 20:43:42 crc kubenswrapper[4983]: W1125 20:43:40.922372 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81a52229_9987_46fe_b40b_a1951d6d0396.slice/crio-e6f8d81623a8095ef881d5ac49d21bf8e1d3f9ac831c29c872ff622fb01513fd WatchSource:0}: Error finding container e6f8d81623a8095ef881d5ac49d21bf8e1d3f9ac831c29c872ff622fb01513fd: Status 404 returned error can't find the container with id e6f8d81623a8095ef881d5ac49d21bf8e1d3f9ac831c29c872ff622fb01513fd Nov 25 20:43:42 crc kubenswrapper[4983]: W1125 20:43:40.926462 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6eb357f_e3ba_4631_951c_65760c2c707d.slice/crio-b1d28aa54ca8ff0f59bb5c5ebf622fc44dcdf2cfd61f8c25340672f4015ad8ce WatchSource:0}: Error finding container b1d28aa54ca8ff0f59bb5c5ebf622fc44dcdf2cfd61f8c25340672f4015ad8ce: Status 404 returned error can't find the container with id 
b1d28aa54ca8ff0f59bb5c5ebf622fc44dcdf2cfd61f8c25340672f4015ad8ce Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:41.866003 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-649c7d5f69-gqrc7" event={"ID":"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e","Type":"ContainerStarted","Data":"fb6e86290dd6ecf6ad5a0b8e77e9be327f6a5630500a79c827b059e943b57016"} Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:41.867540 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vq68b" event={"ID":"a6eb357f-e3ba-4631-951c-65760c2c707d","Type":"ContainerStarted","Data":"b1d28aa54ca8ff0f59bb5c5ebf622fc44dcdf2cfd61f8c25340672f4015ad8ce"} Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:41.868657 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h4hwg" event={"ID":"24119f4e-9bb9-4f12-a031-03ec811465d1","Type":"ContainerStarted","Data":"803d6b7a851b491fdb40d83299442b5c369ac06e5fde38dc9e48b111ae7932c7"} Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:41.869978 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-2tgml" event={"ID":"ed34075b-ee17-40bd-a65e-e531201fb127","Type":"ContainerStarted","Data":"5853b69129d210a615bddc384e01d06b634325e95d6bfe48fe55bcfe9967d07e"} Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:41.871577 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74acde52-084b-4768-9a68-ffb38827f3db","Type":"ContainerStarted","Data":"bbc2f0d245969b73b55f49154beef70f00b89594542efe633a69e7e9009a5315"} Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:41.872592 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" event={"ID":"81a52229-9987-46fe-b40b-a1951d6d0396","Type":"ContainerStarted","Data":"e6f8d81623a8095ef881d5ac49d21bf8e1d3f9ac831c29c872ff622fb01513fd"} Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 
20:43:41.873547 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sfn6q" event={"ID":"18b7eb10-6c61-469a-87c6-d263f94dce5d","Type":"ContainerStarted","Data":"16d64e8f48157ebf17f9764cb9c391833e46ecacb05d151bd202762174947c38"} Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:41.874799 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa168af-421e-4a45-8201-13eb69a20830","Type":"ContainerStarted","Data":"557d7f63b96380c4263a5622c23548333b34c54a0f0182e27ee1806c02e54286"} Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:42.773229 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:43:42 crc kubenswrapper[4983]: I1125 20:43:42.984767 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p6dsc" event={"ID":"53cf8b10-7201-49c1-8f2b-ce63f211b469","Type":"ContainerStarted","Data":"8f2ad1959c96277a0ef16b1585edeb1c2106f497c9d7f3deb9228f4b31ad7af6"} Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.034701 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-p6dsc" podStartSLOduration=8.034678246 podStartE2EDuration="8.034678246s" podCreationTimestamp="2025-11-25 20:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:43:43.032969241 +0000 UTC m=+1004.145502633" watchObservedRunningTime="2025-11-25 20:43:43.034678246 +0000 UTC m=+1004.147211638" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.042638 4983 generic.go:334] "Generic (PLEG): container finished" podID="81a52229-9987-46fe-b40b-a1951d6d0396" containerID="3c63ebd7b01f4b9f2503d1923ef9add9edcdb0b530559646256ebb10898a2e83" exitCode=0 Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.042783 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" event={"ID":"81a52229-9987-46fe-b40b-a1951d6d0396","Type":"ContainerDied","Data":"3c63ebd7b01f4b9f2503d1923ef9add9edcdb0b530559646256ebb10898a2e83"} Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.108400 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-794b99d65-vhntr"] Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.118941 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sfn6q" event={"ID":"18b7eb10-6c61-469a-87c6-d263f94dce5d","Type":"ContainerStarted","Data":"9f184195a370bb71623b3b0bafcf2536a9dffdaf9b4bba924fc04637ec784061"} Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.147045 4983 generic.go:334] "Generic (PLEG): container finished" podID="ed34075b-ee17-40bd-a65e-e531201fb127" containerID="5853b69129d210a615bddc384e01d06b634325e95d6bfe48fe55bcfe9967d07e" exitCode=0 Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.147458 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-2tgml" event={"ID":"ed34075b-ee17-40bd-a65e-e531201fb127","Type":"ContainerDied","Data":"5853b69129d210a615bddc384e01d06b634325e95d6bfe48fe55bcfe9967d07e"} Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.220566 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74acde52-084b-4768-9a68-ffb38827f3db","Type":"ContainerStarted","Data":"96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489"} Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.220613 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74acde52-084b-4768-9a68-ffb38827f3db","Type":"ContainerStarted","Data":"2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd"} Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.220749 4983 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="74acde52-084b-4768-9a68-ffb38827f3db" containerName="glance-log" containerID="cri-o://2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd" gracePeriod=30 Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.220866 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="74acde52-084b-4768-9a68-ffb38827f3db" containerName="glance-httpd" containerID="cri-o://96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489" gracePeriod=30 Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.233387 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-sfn6q" podStartSLOduration=7.233362747 podStartE2EDuration="7.233362747s" podCreationTimestamp="2025-11-25 20:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:43:43.169905217 +0000 UTC m=+1004.282438609" watchObservedRunningTime="2025-11-25 20:43:43.233362747 +0000 UTC m=+1004.345896129" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.240253 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.370843 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.370821766 podStartE2EDuration="7.370821766s" podCreationTimestamp="2025-11-25 20:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:43:43.269215096 +0000 UTC m=+1004.381748488" watchObservedRunningTime="2025-11-25 20:43:43.370821766 +0000 UTC m=+1004.483355158" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.611820 4983 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.729169 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-dns-swift-storage-0\") pod \"ed34075b-ee17-40bd-a65e-e531201fb127\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.729223 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-ovsdbserver-nb\") pod \"ed34075b-ee17-40bd-a65e-e531201fb127\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.729258 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zgwv\" (UniqueName: \"kubernetes.io/projected/ed34075b-ee17-40bd-a65e-e531201fb127-kube-api-access-7zgwv\") pod \"ed34075b-ee17-40bd-a65e-e531201fb127\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.729394 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-config\") pod \"ed34075b-ee17-40bd-a65e-e531201fb127\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.729450 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-ovsdbserver-sb\") pod \"ed34075b-ee17-40bd-a65e-e531201fb127\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.729480 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-dns-svc\") pod \"ed34075b-ee17-40bd-a65e-e531201fb127\" (UID: \"ed34075b-ee17-40bd-a65e-e531201fb127\") " Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.734159 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed34075b-ee17-40bd-a65e-e531201fb127-kube-api-access-7zgwv" (OuterVolumeSpecName: "kube-api-access-7zgwv") pod "ed34075b-ee17-40bd-a65e-e531201fb127" (UID: "ed34075b-ee17-40bd-a65e-e531201fb127"). InnerVolumeSpecName "kube-api-access-7zgwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.760962 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-config" (OuterVolumeSpecName: "config") pod "ed34075b-ee17-40bd-a65e-e531201fb127" (UID: "ed34075b-ee17-40bd-a65e-e531201fb127"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.763710 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed34075b-ee17-40bd-a65e-e531201fb127" (UID: "ed34075b-ee17-40bd-a65e-e531201fb127"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.770315 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed34075b-ee17-40bd-a65e-e531201fb127" (UID: "ed34075b-ee17-40bd-a65e-e531201fb127"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.775523 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed34075b-ee17-40bd-a65e-e531201fb127" (UID: "ed34075b-ee17-40bd-a65e-e531201fb127"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.778203 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ed34075b-ee17-40bd-a65e-e531201fb127" (UID: "ed34075b-ee17-40bd-a65e-e531201fb127"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.831924 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.831964 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zgwv\" (UniqueName: \"kubernetes.io/projected/ed34075b-ee17-40bd-a65e-e531201fb127-kube-api-access-7zgwv\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.831976 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.831985 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-ovsdbserver-sb\") on node 
\"crc\" DevicePath \"\"" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.831993 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:43 crc kubenswrapper[4983]: I1125 20:43:43.832002 4983 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed34075b-ee17-40bd-a65e-e531201fb127-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.055345 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.137233 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74acde52-084b-4768-9a68-ffb38827f3db-httpd-run\") pod \"74acde52-084b-4768-9a68-ffb38827f3db\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.137295 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-scripts\") pod \"74acde52-084b-4768-9a68-ffb38827f3db\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.137436 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"74acde52-084b-4768-9a68-ffb38827f3db\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.137473 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74acde52-084b-4768-9a68-ffb38827f3db-logs\") pod 
\"74acde52-084b-4768-9a68-ffb38827f3db\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.137510 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-combined-ca-bundle\") pod \"74acde52-084b-4768-9a68-ffb38827f3db\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.137590 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxsj8\" (UniqueName: \"kubernetes.io/projected/74acde52-084b-4768-9a68-ffb38827f3db-kube-api-access-sxsj8\") pod \"74acde52-084b-4768-9a68-ffb38827f3db\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.137637 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-config-data\") pod \"74acde52-084b-4768-9a68-ffb38827f3db\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.137677 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-public-tls-certs\") pod \"74acde52-084b-4768-9a68-ffb38827f3db\" (UID: \"74acde52-084b-4768-9a68-ffb38827f3db\") " Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.141020 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74acde52-084b-4768-9a68-ffb38827f3db-logs" (OuterVolumeSpecName: "logs") pod "74acde52-084b-4768-9a68-ffb38827f3db" (UID: "74acde52-084b-4768-9a68-ffb38827f3db"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.144643 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74acde52-084b-4768-9a68-ffb38827f3db-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "74acde52-084b-4768-9a68-ffb38827f3db" (UID: "74acde52-084b-4768-9a68-ffb38827f3db"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.146499 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74acde52-084b-4768-9a68-ffb38827f3db-kube-api-access-sxsj8" (OuterVolumeSpecName: "kube-api-access-sxsj8") pod "74acde52-084b-4768-9a68-ffb38827f3db" (UID: "74acde52-084b-4768-9a68-ffb38827f3db"). InnerVolumeSpecName "kube-api-access-sxsj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.149343 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "74acde52-084b-4768-9a68-ffb38827f3db" (UID: "74acde52-084b-4768-9a68-ffb38827f3db"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.158832 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-scripts" (OuterVolumeSpecName: "scripts") pod "74acde52-084b-4768-9a68-ffb38827f3db" (UID: "74acde52-084b-4768-9a68-ffb38827f3db"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.210016 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-config-data" (OuterVolumeSpecName: "config-data") pod "74acde52-084b-4768-9a68-ffb38827f3db" (UID: "74acde52-084b-4768-9a68-ffb38827f3db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.221363 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "74acde52-084b-4768-9a68-ffb38827f3db" (UID: "74acde52-084b-4768-9a68-ffb38827f3db"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.221765 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74acde52-084b-4768-9a68-ffb38827f3db" (UID: "74acde52-084b-4768-9a68-ffb38827f3db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.241700 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.241761 4983 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.241775 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74acde52-084b-4768-9a68-ffb38827f3db-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.241787 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.241802 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxsj8\" (UniqueName: \"kubernetes.io/projected/74acde52-084b-4768-9a68-ffb38827f3db-kube-api-access-sxsj8\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.241813 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.241822 4983 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74acde52-084b-4768-9a68-ffb38827f3db-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.241833 4983 reconciler_common.go:293] "Volume detached 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74acde52-084b-4768-9a68-ffb38827f3db-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.251209 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"22307b23-1606-4b31-8f0d-d24b999df93d","Type":"ContainerStarted","Data":"f37a4aa7b754b7c363a33df467750d063cd8b66e6802005e8f322e1b4a817b64"} Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.251267 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"22307b23-1606-4b31-8f0d-d24b999df93d","Type":"ContainerStarted","Data":"e7328626b3bc6489a329661bc15b3bbc7694322dcf4ae6c4a11dff0437afc0e6"} Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.259576 4983 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.266623 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" event={"ID":"81a52229-9987-46fe-b40b-a1951d6d0396","Type":"ContainerStarted","Data":"14009cf7136ae8752d942d54dc30f5b24e74800affa9dc49231548fa0eca6ca6"} Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.266731 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.270012 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-794b99d65-vhntr" event={"ID":"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b","Type":"ContainerStarted","Data":"1518f752a8ed2172f96da8fa95ff35f3bf2bf5cbfbb2a490109382217d75493d"} Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.276453 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-2tgml" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.276451 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-2tgml" event={"ID":"ed34075b-ee17-40bd-a65e-e531201fb127","Type":"ContainerDied","Data":"14742830ec14647b9eae2e7e6bbee2283ab71edba1153b6ddb5147e85f08a6e0"} Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.276613 4983 scope.go:117] "RemoveContainer" containerID="5853b69129d210a615bddc384e01d06b634325e95d6bfe48fe55bcfe9967d07e" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.278823 4983 generic.go:334] "Generic (PLEG): container finished" podID="74acde52-084b-4768-9a68-ffb38827f3db" containerID="96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489" exitCode=143 Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.278853 4983 generic.go:334] "Generic (PLEG): container finished" podID="74acde52-084b-4768-9a68-ffb38827f3db" containerID="2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd" exitCode=143 Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.279711 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74acde52-084b-4768-9a68-ffb38827f3db","Type":"ContainerDied","Data":"96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489"} Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.279744 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"74acde52-084b-4768-9a68-ffb38827f3db","Type":"ContainerDied","Data":"2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd"} Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.279758 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"74acde52-084b-4768-9a68-ffb38827f3db","Type":"ContainerDied","Data":"bbc2f0d245969b73b55f49154beef70f00b89594542efe633a69e7e9009a5315"} Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.279809 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.343974 4983 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.372125 4983 scope.go:117] "RemoveContainer" containerID="96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.382352 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" podStartSLOduration=8.382324739 podStartE2EDuration="8.382324739s" podCreationTimestamp="2025-11-25 20:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:43:44.290271932 +0000 UTC m=+1005.402805324" watchObservedRunningTime="2025-11-25 20:43:44.382324739 +0000 UTC m=+1005.494858131" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.408796 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2tgml"] Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.432674 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2tgml"] Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.447659 4983 scope.go:117] "RemoveContainer" containerID="2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.450493 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-external-api-0"] Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.462447 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.473879 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:43:44 crc kubenswrapper[4983]: E1125 20:43:44.474407 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74acde52-084b-4768-9a68-ffb38827f3db" containerName="glance-log" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.474430 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="74acde52-084b-4768-9a68-ffb38827f3db" containerName="glance-log" Nov 25 20:43:44 crc kubenswrapper[4983]: E1125 20:43:44.474454 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed34075b-ee17-40bd-a65e-e531201fb127" containerName="init" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.474461 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed34075b-ee17-40bd-a65e-e531201fb127" containerName="init" Nov 25 20:43:44 crc kubenswrapper[4983]: E1125 20:43:44.474488 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74acde52-084b-4768-9a68-ffb38827f3db" containerName="glance-httpd" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.474497 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="74acde52-084b-4768-9a68-ffb38827f3db" containerName="glance-httpd" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.474678 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="74acde52-084b-4768-9a68-ffb38827f3db" containerName="glance-log" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.474697 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed34075b-ee17-40bd-a65e-e531201fb127" containerName="init" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.474707 4983 
memory_manager.go:354] "RemoveStaleState removing state" podUID="74acde52-084b-4768-9a68-ffb38827f3db" containerName="glance-httpd" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.475687 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.483431 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.483710 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.489044 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.503253 4983 scope.go:117] "RemoveContainer" containerID="96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489" Nov 25 20:43:44 crc kubenswrapper[4983]: E1125 20:43:44.504177 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489\": container with ID starting with 96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489 not found: ID does not exist" containerID="96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.504234 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489"} err="failed to get container status \"96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489\": rpc error: code = NotFound desc = could not find container \"96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489\": container with ID starting with 
96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489 not found: ID does not exist" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.504291 4983 scope.go:117] "RemoveContainer" containerID="2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd" Nov 25 20:43:44 crc kubenswrapper[4983]: E1125 20:43:44.505189 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd\": container with ID starting with 2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd not found: ID does not exist" containerID="2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.505207 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd"} err="failed to get container status \"2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd\": rpc error: code = NotFound desc = could not find container \"2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd\": container with ID starting with 2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd not found: ID does not exist" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.505247 4983 scope.go:117] "RemoveContainer" containerID="96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.508200 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489"} err="failed to get container status \"96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489\": rpc error: code = NotFound desc = could not find container \"96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489\": container with ID 
starting with 96d96375a3a4a263fed2137fcdcbe25d308c956493984d3ae9f2dd83f806d489 not found: ID does not exist" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.508248 4983 scope.go:117] "RemoveContainer" containerID="2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.511510 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd"} err="failed to get container status \"2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd\": rpc error: code = NotFound desc = could not find container \"2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd\": container with ID starting with 2ef1cfa5f45247287a0d922fee814930038285ae8d7be2652366adc0ee12f3dd not found: ID does not exist" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.548677 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.548723 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-scripts\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.548751 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9eb63554-a61c-478b-b626-83d825a75016-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.548795 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.548878 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-config-data\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.548919 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.548950 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4whft\" (UniqueName: \"kubernetes.io/projected/9eb63554-a61c-478b-b626-83d825a75016-kube-api-access-4whft\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.548975 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb63554-a61c-478b-b626-83d825a75016-logs\") pod 
\"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.652103 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-config-data\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.652196 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.652281 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4whft\" (UniqueName: \"kubernetes.io/projected/9eb63554-a61c-478b-b626-83d825a75016-kube-api-access-4whft\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.652330 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb63554-a61c-478b-b626-83d825a75016-logs\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.652363 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.652413 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-scripts\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.652465 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9eb63554-a61c-478b-b626-83d825a75016-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.652515 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.656317 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb63554-a61c-478b-b626-83d825a75016-logs\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.657368 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9eb63554-a61c-478b-b626-83d825a75016-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc 
kubenswrapper[4983]: I1125 20:43:44.658579 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.660326 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.662067 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.664508 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-config-data\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.666637 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-scripts\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.673222 4983 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4whft\" (UniqueName: \"kubernetes.io/projected/9eb63554-a61c-478b-b626-83d825a75016-kube-api-access-4whft\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.707738 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " pod="openstack/glance-default-external-api-0" Nov 25 20:43:44 crc kubenswrapper[4983]: I1125 20:43:44.797628 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.231036 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-768fff7bd9-zjc9n"] Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.273912 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-746b6775bd-26zqf"] Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.281375 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.289075 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-746b6775bd-26zqf"] Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.298150 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.321173 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.368737 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ac04518-4a47-43b3-8e9f-84e8f3a80648-scripts\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.368791 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ac04518-4a47-43b3-8e9f-84e8f3a80648-config-data\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.368834 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xcjg\" (UniqueName: \"kubernetes.io/projected/1ac04518-4a47-43b3-8e9f-84e8f3a80648-kube-api-access-2xcjg\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.368973 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-horizon-tls-certs\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.369066 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-combined-ca-bundle\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.369404 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-horizon-secret-key\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.369598 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ac04518-4a47-43b3-8e9f-84e8f3a80648-logs\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.377108 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-794b99d65-vhntr"] Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.416825 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f9d7c8cfb-s259l"] Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.419280 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.433424 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f9d7c8cfb-s259l"] Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473106 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed474a92-4901-4ded-89c1-736427d72c92-horizon-secret-key\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473162 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed474a92-4901-4ded-89c1-736427d72c92-logs\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473183 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed474a92-4901-4ded-89c1-736427d72c92-config-data\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473210 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ac04518-4a47-43b3-8e9f-84e8f3a80648-config-data\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473256 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xcjg\" (UniqueName: 
\"kubernetes.io/projected/1ac04518-4a47-43b3-8e9f-84e8f3a80648-kube-api-access-2xcjg\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473280 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-horizon-tls-certs\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473317 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-combined-ca-bundle\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473420 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed474a92-4901-4ded-89c1-736427d72c92-scripts\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473449 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-horizon-secret-key\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473580 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ac04518-4a47-43b3-8e9f-84e8f3a80648-logs\") pod 
\"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473613 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qtm8\" (UniqueName: \"kubernetes.io/projected/ed474a92-4901-4ded-89c1-736427d72c92-kube-api-access-6qtm8\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473651 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed474a92-4901-4ded-89c1-736427d72c92-horizon-tls-certs\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473680 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ac04518-4a47-43b3-8e9f-84e8f3a80648-scripts\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.473696 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed474a92-4901-4ded-89c1-736427d72c92-combined-ca-bundle\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.475459 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ac04518-4a47-43b3-8e9f-84e8f3a80648-config-data\") pod \"horizon-746b6775bd-26zqf\" (UID: 
\"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.477271 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ac04518-4a47-43b3-8e9f-84e8f3a80648-scripts\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.488222 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ac04518-4a47-43b3-8e9f-84e8f3a80648-logs\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.496365 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-combined-ca-bundle\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.496797 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-horizon-tls-certs\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.522022 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-horizon-secret-key\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.547614 4983 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xcjg\" (UniqueName: \"kubernetes.io/projected/1ac04518-4a47-43b3-8e9f-84e8f3a80648-kube-api-access-2xcjg\") pod \"horizon-746b6775bd-26zqf\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") " pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.581708 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed474a92-4901-4ded-89c1-736427d72c92-scripts\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.581826 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qtm8\" (UniqueName: \"kubernetes.io/projected/ed474a92-4901-4ded-89c1-736427d72c92-kube-api-access-6qtm8\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.581864 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed474a92-4901-4ded-89c1-736427d72c92-horizon-tls-certs\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.581890 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed474a92-4901-4ded-89c1-736427d72c92-combined-ca-bundle\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.581915 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed474a92-4901-4ded-89c1-736427d72c92-horizon-secret-key\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.581940 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed474a92-4901-4ded-89c1-736427d72c92-logs\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.581961 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed474a92-4901-4ded-89c1-736427d72c92-config-data\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.583390 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ed474a92-4901-4ded-89c1-736427d72c92-config-data\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.584568 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ed474a92-4901-4ded-89c1-736427d72c92-scripts\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.599069 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed474a92-4901-4ded-89c1-736427d72c92-logs\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: 
\"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.602540 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ed474a92-4901-4ded-89c1-736427d72c92-horizon-secret-key\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.607017 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed474a92-4901-4ded-89c1-736427d72c92-horizon-tls-certs\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.612260 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed474a92-4901-4ded-89c1-736427d72c92-combined-ca-bundle\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.619054 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qtm8\" (UniqueName: \"kubernetes.io/projected/ed474a92-4901-4ded-89c1-736427d72c92-kube-api-access-6qtm8\") pod \"horizon-7f9d7c8cfb-s259l\" (UID: \"ed474a92-4901-4ded-89c1-736427d72c92\") " pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.646014 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74acde52-084b-4768-9a68-ffb38827f3db" path="/var/lib/kubelet/pods/74acde52-084b-4768-9a68-ffb38827f3db/volumes" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.646929 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ed34075b-ee17-40bd-a65e-e531201fb127" path="/var/lib/kubelet/pods/ed34075b-ee17-40bd-a65e-e531201fb127/volumes" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.647567 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.656591 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:43:45 crc kubenswrapper[4983]: I1125 20:43:45.659920 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:43:46 crc kubenswrapper[4983]: I1125 20:43:46.389261 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-746b6775bd-26zqf"] Nov 25 20:43:46 crc kubenswrapper[4983]: W1125 20:43:46.397236 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ac04518_4a47_43b3_8e9f_84e8f3a80648.slice/crio-f0aa031ea4dce9deb3b2d4c187a122b147cdaa498a949d77e53e52a304b684d7 WatchSource:0}: Error finding container f0aa031ea4dce9deb3b2d4c187a122b147cdaa498a949d77e53e52a304b684d7: Status 404 returned error can't find the container with id f0aa031ea4dce9deb3b2d4c187a122b147cdaa498a949d77e53e52a304b684d7 Nov 25 20:43:46 crc kubenswrapper[4983]: I1125 20:43:46.399575 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"22307b23-1606-4b31-8f0d-d24b999df93d","Type":"ContainerStarted","Data":"8bf8ef8946116740b6988388de6bbabc4660bbc05b70481110d1c9f1d33b4f7e"} Nov 25 20:43:46 crc kubenswrapper[4983]: I1125 20:43:46.399696 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="22307b23-1606-4b31-8f0d-d24b999df93d" containerName="glance-log" 
containerID="cri-o://f37a4aa7b754b7c363a33df467750d063cd8b66e6802005e8f322e1b4a817b64" gracePeriod=30 Nov 25 20:43:46 crc kubenswrapper[4983]: I1125 20:43:46.399760 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="22307b23-1606-4b31-8f0d-d24b999df93d" containerName="glance-httpd" containerID="cri-o://8bf8ef8946116740b6988388de6bbabc4660bbc05b70481110d1c9f1d33b4f7e" gracePeriod=30 Nov 25 20:43:46 crc kubenswrapper[4983]: I1125 20:43:46.402082 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9eb63554-a61c-478b-b626-83d825a75016","Type":"ContainerStarted","Data":"88c987c910d285840b9b96d34dd200f30126ed02ebedbe932d7f96c7df9dfe07"} Nov 25 20:43:46 crc kubenswrapper[4983]: I1125 20:43:46.431227 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.431200518 podStartE2EDuration="10.431200518s" podCreationTimestamp="2025-11-25 20:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:43:46.426259717 +0000 UTC m=+1007.538793129" watchObservedRunningTime="2025-11-25 20:43:46.431200518 +0000 UTC m=+1007.543733910" Nov 25 20:43:46 crc kubenswrapper[4983]: I1125 20:43:46.485568 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f9d7c8cfb-s259l"] Nov 25 20:43:46 crc kubenswrapper[4983]: W1125 20:43:46.503834 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded474a92_4901_4ded_89c1_736427d72c92.slice/crio-16b3fee235b2e35efb221aa7a07906e5b2b5aa3a8d7ebd2ce39fa93ac5692a95 WatchSource:0}: Error finding container 16b3fee235b2e35efb221aa7a07906e5b2b5aa3a8d7ebd2ce39fa93ac5692a95: Status 404 returned error can't find the container with id 
16b3fee235b2e35efb221aa7a07906e5b2b5aa3a8d7ebd2ce39fa93ac5692a95 Nov 25 20:43:47 crc kubenswrapper[4983]: I1125 20:43:47.432798 4983 generic.go:334] "Generic (PLEG): container finished" podID="22307b23-1606-4b31-8f0d-d24b999df93d" containerID="8bf8ef8946116740b6988388de6bbabc4660bbc05b70481110d1c9f1d33b4f7e" exitCode=0 Nov 25 20:43:47 crc kubenswrapper[4983]: I1125 20:43:47.434352 4983 generic.go:334] "Generic (PLEG): container finished" podID="22307b23-1606-4b31-8f0d-d24b999df93d" containerID="f37a4aa7b754b7c363a33df467750d063cd8b66e6802005e8f322e1b4a817b64" exitCode=143 Nov 25 20:43:47 crc kubenswrapper[4983]: I1125 20:43:47.433028 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"22307b23-1606-4b31-8f0d-d24b999df93d","Type":"ContainerDied","Data":"8bf8ef8946116740b6988388de6bbabc4660bbc05b70481110d1c9f1d33b4f7e"} Nov 25 20:43:47 crc kubenswrapper[4983]: I1125 20:43:47.434467 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"22307b23-1606-4b31-8f0d-d24b999df93d","Type":"ContainerDied","Data":"f37a4aa7b754b7c363a33df467750d063cd8b66e6802005e8f322e1b4a817b64"} Nov 25 20:43:47 crc kubenswrapper[4983]: I1125 20:43:47.437412 4983 generic.go:334] "Generic (PLEG): container finished" podID="53cf8b10-7201-49c1-8f2b-ce63f211b469" containerID="8f2ad1959c96277a0ef16b1585edeb1c2106f497c9d7f3deb9228f4b31ad7af6" exitCode=0 Nov 25 20:43:47 crc kubenswrapper[4983]: I1125 20:43:47.437492 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p6dsc" event={"ID":"53cf8b10-7201-49c1-8f2b-ce63f211b469","Type":"ContainerDied","Data":"8f2ad1959c96277a0ef16b1585edeb1c2106f497c9d7f3deb9228f4b31ad7af6"} Nov 25 20:43:47 crc kubenswrapper[4983]: I1125 20:43:47.441736 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"9eb63554-a61c-478b-b626-83d825a75016","Type":"ContainerStarted","Data":"c5542ddbdd9433e9069ef5fc22b65818a6af9036f7642e72f04669b7e0927c82"} Nov 25 20:43:47 crc kubenswrapper[4983]: I1125 20:43:47.448012 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-746b6775bd-26zqf" event={"ID":"1ac04518-4a47-43b3-8e9f-84e8f3a80648","Type":"ContainerStarted","Data":"f0aa031ea4dce9deb3b2d4c187a122b147cdaa498a949d77e53e52a304b684d7"} Nov 25 20:43:47 crc kubenswrapper[4983]: I1125 20:43:47.454090 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d7c8cfb-s259l" event={"ID":"ed474a92-4901-4ded-89c1-736427d72c92","Type":"ContainerStarted","Data":"16b3fee235b2e35efb221aa7a07906e5b2b5aa3a8d7ebd2ce39fa93ac5692a95"} Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.503195 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p6dsc" event={"ID":"53cf8b10-7201-49c1-8f2b-ce63f211b469","Type":"ContainerDied","Data":"f1a2531d5a01d305701e7c10eb82ed23528e2ae08af238c1734d6d8262a588b2"} Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.504029 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1a2531d5a01d305701e7c10eb82ed23528e2ae08af238c1734d6d8262a588b2" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.506156 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"22307b23-1606-4b31-8f0d-d24b999df93d","Type":"ContainerDied","Data":"e7328626b3bc6489a329661bc15b3bbc7694322dcf4ae6c4a11dff0437afc0e6"} Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.506214 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7328626b3bc6489a329661bc15b3bbc7694322dcf4ae6c4a11dff0437afc0e6" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.549286 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.571323 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.655638 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-scripts\") pod \"53cf8b10-7201-49c1-8f2b-ce63f211b469\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.656132 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-config-data\") pod \"53cf8b10-7201-49c1-8f2b-ce63f211b469\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.656209 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-credential-keys\") pod \"53cf8b10-7201-49c1-8f2b-ce63f211b469\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.656251 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-fernet-keys\") pod \"53cf8b10-7201-49c1-8f2b-ce63f211b469\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.658005 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-combined-ca-bundle\") pod \"22307b23-1606-4b31-8f0d-d24b999df93d\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " Nov 
25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.658048 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-internal-tls-certs\") pod \"22307b23-1606-4b31-8f0d-d24b999df93d\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.658078 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-combined-ca-bundle\") pod \"53cf8b10-7201-49c1-8f2b-ce63f211b469\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.658118 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kskm8\" (UniqueName: \"kubernetes.io/projected/53cf8b10-7201-49c1-8f2b-ce63f211b469-kube-api-access-kskm8\") pod \"53cf8b10-7201-49c1-8f2b-ce63f211b469\" (UID: \"53cf8b10-7201-49c1-8f2b-ce63f211b469\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.672768 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "53cf8b10-7201-49c1-8f2b-ce63f211b469" (UID: "53cf8b10-7201-49c1-8f2b-ce63f211b469"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.672879 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-scripts" (OuterVolumeSpecName: "scripts") pod "53cf8b10-7201-49c1-8f2b-ce63f211b469" (UID: "53cf8b10-7201-49c1-8f2b-ce63f211b469"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.672881 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53cf8b10-7201-49c1-8f2b-ce63f211b469-kube-api-access-kskm8" (OuterVolumeSpecName: "kube-api-access-kskm8") pod "53cf8b10-7201-49c1-8f2b-ce63f211b469" (UID: "53cf8b10-7201-49c1-8f2b-ce63f211b469"). InnerVolumeSpecName "kube-api-access-kskm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.672937 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "53cf8b10-7201-49c1-8f2b-ce63f211b469" (UID: "53cf8b10-7201-49c1-8f2b-ce63f211b469"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.700845 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-config-data" (OuterVolumeSpecName: "config-data") pod "53cf8b10-7201-49c1-8f2b-ce63f211b469" (UID: "53cf8b10-7201-49c1-8f2b-ce63f211b469"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.703846 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53cf8b10-7201-49c1-8f2b-ce63f211b469" (UID: "53cf8b10-7201-49c1-8f2b-ce63f211b469"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.709367 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22307b23-1606-4b31-8f0d-d24b999df93d" (UID: "22307b23-1606-4b31-8f0d-d24b999df93d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.721611 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "22307b23-1606-4b31-8f0d-d24b999df93d" (UID: "22307b23-1606-4b31-8f0d-d24b999df93d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.759064 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-scripts\") pod \"22307b23-1606-4b31-8f0d-d24b999df93d\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.759136 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22307b23-1606-4b31-8f0d-d24b999df93d-logs\") pod \"22307b23-1606-4b31-8f0d-d24b999df93d\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.759170 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"22307b23-1606-4b31-8f0d-d24b999df93d\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.759203 
4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-config-data\") pod \"22307b23-1606-4b31-8f0d-d24b999df93d\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.759281 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22307b23-1606-4b31-8f0d-d24b999df93d-httpd-run\") pod \"22307b23-1606-4b31-8f0d-d24b999df93d\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.759301 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkcfh\" (UniqueName: \"kubernetes.io/projected/22307b23-1606-4b31-8f0d-d24b999df93d-kube-api-access-kkcfh\") pod \"22307b23-1606-4b31-8f0d-d24b999df93d\" (UID: \"22307b23-1606-4b31-8f0d-d24b999df93d\") " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.760046 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.760052 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22307b23-1606-4b31-8f0d-d24b999df93d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "22307b23-1606-4b31-8f0d-d24b999df93d" (UID: "22307b23-1606-4b31-8f0d-d24b999df93d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.760066 4983 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.760099 4983 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.760286 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.760303 4983 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.760312 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.760322 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kskm8\" (UniqueName: \"kubernetes.io/projected/53cf8b10-7201-49c1-8f2b-ce63f211b469-kube-api-access-kskm8\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.760331 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53cf8b10-7201-49c1-8f2b-ce63f211b469-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.760230 
4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22307b23-1606-4b31-8f0d-d24b999df93d-logs" (OuterVolumeSpecName: "logs") pod "22307b23-1606-4b31-8f0d-d24b999df93d" (UID: "22307b23-1606-4b31-8f0d-d24b999df93d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.765787 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "22307b23-1606-4b31-8f0d-d24b999df93d" (UID: "22307b23-1606-4b31-8f0d-d24b999df93d"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.768164 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-scripts" (OuterVolumeSpecName: "scripts") pod "22307b23-1606-4b31-8f0d-d24b999df93d" (UID: "22307b23-1606-4b31-8f0d-d24b999df93d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.772815 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22307b23-1606-4b31-8f0d-d24b999df93d-kube-api-access-kkcfh" (OuterVolumeSpecName: "kube-api-access-kkcfh") pod "22307b23-1606-4b31-8f0d-d24b999df93d" (UID: "22307b23-1606-4b31-8f0d-d24b999df93d"). InnerVolumeSpecName "kube-api-access-kkcfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.867024 4983 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/22307b23-1606-4b31-8f0d-d24b999df93d-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.867418 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkcfh\" (UniqueName: \"kubernetes.io/projected/22307b23-1606-4b31-8f0d-d24b999df93d-kube-api-access-kkcfh\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.867431 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.867444 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22307b23-1606-4b31-8f0d-d24b999df93d-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.867469 4983 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.869022 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-config-data" (OuterVolumeSpecName: "config-data") pod "22307b23-1606-4b31-8f0d-d24b999df93d" (UID: "22307b23-1606-4b31-8f0d-d24b999df93d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.902089 4983 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.969372 4983 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:51 crc kubenswrapper[4983]: I1125 20:43:51.969399 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22307b23-1606-4b31-8f0d-d24b999df93d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.073713 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.133243 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5m84b"] Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.133688 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" podUID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerName="dnsmasq-dns" containerID="cri-o://b1e16be10d893541defc632a36c2712e2a325224e052d2f75333f7a05717a29a" gracePeriod=10 Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.525258 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9eb63554-a61c-478b-b626-83d825a75016","Type":"ContainerStarted","Data":"460d3e074933817786a1d21a6371e349c1b72bdb52bf9295edfc1bb2b3eb1117"} Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.525427 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="9eb63554-a61c-478b-b626-83d825a75016" containerName="glance-log" containerID="cri-o://c5542ddbdd9433e9069ef5fc22b65818a6af9036f7642e72f04669b7e0927c82" gracePeriod=30 Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.525792 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9eb63554-a61c-478b-b626-83d825a75016" containerName="glance-httpd" containerID="cri-o://460d3e074933817786a1d21a6371e349c1b72bdb52bf9295edfc1bb2b3eb1117" gracePeriod=30 Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.540777 4983 generic.go:334] "Generic (PLEG): container finished" podID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerID="b1e16be10d893541defc632a36c2712e2a325224e052d2f75333f7a05717a29a" exitCode=0 Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.540884 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p6dsc" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.542419 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" event={"ID":"f58c94e7-0496-482c-aa85-079d35d0bd31","Type":"ContainerDied","Data":"b1e16be10d893541defc632a36c2712e2a325224e052d2f75333f7a05717a29a"} Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.542584 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.561229 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.561207266 podStartE2EDuration="8.561207266s" podCreationTimestamp="2025-11-25 20:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:43:52.553908133 +0000 UTC m=+1013.666441535" watchObservedRunningTime="2025-11-25 20:43:52.561207266 +0000 UTC m=+1013.673740658" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.606620 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.614247 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.637034 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:43:52 crc kubenswrapper[4983]: E1125 20:43:52.637434 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22307b23-1606-4b31-8f0d-d24b999df93d" containerName="glance-log" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.637456 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="22307b23-1606-4b31-8f0d-d24b999df93d" containerName="glance-log" Nov 25 20:43:52 crc kubenswrapper[4983]: E1125 20:43:52.637481 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53cf8b10-7201-49c1-8f2b-ce63f211b469" containerName="keystone-bootstrap" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.637488 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="53cf8b10-7201-49c1-8f2b-ce63f211b469" containerName="keystone-bootstrap" Nov 25 20:43:52 crc kubenswrapper[4983]: E1125 20:43:52.637501 4983 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22307b23-1606-4b31-8f0d-d24b999df93d" containerName="glance-httpd" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.637507 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="22307b23-1606-4b31-8f0d-d24b999df93d" containerName="glance-httpd" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.637946 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="22307b23-1606-4b31-8f0d-d24b999df93d" containerName="glance-httpd" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.637973 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="53cf8b10-7201-49c1-8f2b-ce63f211b469" containerName="keystone-bootstrap" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.637989 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="22307b23-1606-4b31-8f0d-d24b999df93d" containerName="glance-log" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.638936 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.646868 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.646927 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.694245 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.762619 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-p6dsc"] Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.788775 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-p6dsc"] Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.805977 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.806050 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.806071 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.806103 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.806130 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.806145 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.806169 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp4k8\" (UniqueName: \"kubernetes.io/projected/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-kube-api-access-mp4k8\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.806191 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.829379 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hjjss"] Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.830906 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.841240 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.841475 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q4pfh" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.841650 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.841805 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.842343 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.873852 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hjjss"] Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.907627 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.907666 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.907705 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.907731 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.907746 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.907771 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp4k8\" (UniqueName: \"kubernetes.io/projected/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-kube-api-access-mp4k8\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.907788 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.907865 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.910390 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-logs\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.927975 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.932775 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.947670 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.948326 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.950674 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:52 crc kubenswrapper[4983]: I1125 20:43:52.990338 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.000469 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp4k8\" (UniqueName: \"kubernetes.io/projected/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-kube-api-access-mp4k8\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.010459 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-combined-ca-bundle\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.010515 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-scripts\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.010605 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-credential-keys\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.010673 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-config-data\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.010713 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9gss\" (UniqueName: \"kubernetes.io/projected/fb577055-b6b9-4559-9f67-2253439acfc7-kube-api-access-w9gss\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.010778 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-fernet-keys\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.112533 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-config-data\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.112627 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9gss\" (UniqueName: \"kubernetes.io/projected/fb577055-b6b9-4559-9f67-2253439acfc7-kube-api-access-w9gss\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.112671 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-fernet-keys\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.112738 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-combined-ca-bundle\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.112765 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-scripts\") pod \"keystone-bootstrap-hjjss\" 
(UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.112789 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-credential-keys\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.116410 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-credential-keys\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.117744 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-scripts\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.118137 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-fernet-keys\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.120769 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-combined-ca-bundle\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 
20:43:53.131801 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-config-data\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.133501 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9gss\" (UniqueName: \"kubernetes.io/projected/fb577055-b6b9-4559-9f67-2253439acfc7-kube-api-access-w9gss\") pod \"keystone-bootstrap-hjjss\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.162930 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.196836 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.329018 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.560814 4983 generic.go:334] "Generic (PLEG): container finished" podID="9eb63554-a61c-478b-b626-83d825a75016" containerID="460d3e074933817786a1d21a6371e349c1b72bdb52bf9295edfc1bb2b3eb1117" exitCode=0 Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.561161 4983 generic.go:334] "Generic (PLEG): container finished" podID="9eb63554-a61c-478b-b626-83d825a75016" containerID="c5542ddbdd9433e9069ef5fc22b65818a6af9036f7642e72f04669b7e0927c82" exitCode=143 Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.560935 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9eb63554-a61c-478b-b626-83d825a75016","Type":"ContainerDied","Data":"460d3e074933817786a1d21a6371e349c1b72bdb52bf9295edfc1bb2b3eb1117"} Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.561203 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9eb63554-a61c-478b-b626-83d825a75016","Type":"ContainerDied","Data":"c5542ddbdd9433e9069ef5fc22b65818a6af9036f7642e72f04669b7e0927c82"} Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.618768 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22307b23-1606-4b31-8f0d-d24b999df93d" path="/var/lib/kubelet/pods/22307b23-1606-4b31-8f0d-d24b999df93d/volumes" Nov 25 20:43:53 crc kubenswrapper[4983]: I1125 20:43:53.620137 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53cf8b10-7201-49c1-8f2b-ce63f211b469" path="/var/lib/kubelet/pods/53cf8b10-7201-49c1-8f2b-ce63f211b469/volumes" Nov 25 20:43:55 crc kubenswrapper[4983]: I1125 20:43:55.252333 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" podUID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.134:5353: connect: connection refused" Nov 25 20:43:56 crc kubenswrapper[4983]: E1125 20:43:56.979852 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 25 20:43:56 crc kubenswrapper[4983]: E1125 20:43:56.980543 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n547h5f7h67dh559h678h588hdh7dh87hfbhb5h8dh9hf7h65dh595h5bbh567h56bh75h64fh688h577h548h74h5f6hbch95h54fhf4h564h85q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q765t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(4fa168af-421e-4a45-8201-13eb69a20830): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:44:00 crc kubenswrapper[4983]: I1125 20:44:00.252700 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" podUID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 20:44:02.101333 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 20:44:02.102044 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n98h657h59fh57fh599h587h695h7bhbbh5f7h59dh68h676h689hdbh6dh656h7h85h597h658h557h66dhcbh65hcbhbfh58ch7bhf9h667h59dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h78pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-649c7d5f69-gqrc7_openstack(bd9b0d36-cf2a-4218-88ef-fd0d74bb603e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 
20:44:02.107222 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-649c7d5f69-gqrc7" podUID="bd9b0d36-cf2a-4218-88ef-fd0d74bb603e" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 20:44:02.117014 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 20:44:02.117276 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n559h548h77h5fh97hb9h95h5c4h697h678h568h65dhbdh577h5b7h5f4h6bhf7h5cch5cfh5bh554h5dbh5ffh594h689h577h674hd8h9bh689h5d8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qtm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7f9d7c8cfb-s259l_openstack(ed474a92-4901-4ded-89c1-736427d72c92): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 
20:44:02.120054 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7f9d7c8cfb-s259l" podUID="ed474a92-4901-4ded-89c1-736427d72c92" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 20:44:02.136191 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 20:44:02.136505 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6dh54bh5ffh9hf9hdfh5fch57chf5h554h657h696h596h58bh55bh69h694h5ch55h696hb5h8h656h5dbhfbh558hf5h675h5c9hb9hbdh5b5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tphnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-794b99d65-vhntr_openstack(6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 
20:44:02.138885 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-794b99d65-vhntr" podUID="6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 20:44:02.156180 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 20:44:02.156435 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncchb4h584h5bch75h554h554h4h586h56fh598h5b4hb4h67fh8dhddh685h64dh67fh577hb8h665h5f7h9bh8bhcch7h5bch7ch575h57fh89q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xcjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-746b6775bd-26zqf_openstack(1ac04518-4a47-43b3-8e9f-84e8f3a80648): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 
20:44:02.159601 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-746b6775bd-26zqf" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 20:44:02.208425 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 20:44:02.208625 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n576h588h68dh57h65bhb7h66fh5f9h64h59fh548hffh5ddh697h76h57hc4h656hc6h595h57bhdbh66chbbh575h66bh5d6h67hc5h9h59ch67cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vx274,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-768fff7bd9-zjc9n_openstack(6ed84598-dc8c-4060-b762-1c8240ed61fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 
20:44:02.211577 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-768fff7bd9-zjc9n" podUID="6ed84598-dc8c-4060-b762-1c8240ed61fa" Nov 25 20:44:02 crc kubenswrapper[4983]: I1125 20:44:02.685980 4983 generic.go:334] "Generic (PLEG): container finished" podID="18b7eb10-6c61-469a-87c6-d263f94dce5d" containerID="9f184195a370bb71623b3b0bafcf2536a9dffdaf9b4bba924fc04637ec784061" exitCode=0 Nov 25 20:44:02 crc kubenswrapper[4983]: I1125 20:44:02.686192 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sfn6q" event={"ID":"18b7eb10-6c61-469a-87c6-d263f94dce5d","Type":"ContainerDied","Data":"9f184195a370bb71623b3b0bafcf2536a9dffdaf9b4bba924fc04637ec784061"} Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 20:44:02.689167 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-746b6775bd-26zqf" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" Nov 25 20:44:02 crc kubenswrapper[4983]: E1125 20:44:02.689482 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7f9d7c8cfb-s259l" podUID="ed474a92-4901-4ded-89c1-736427d72c92" Nov 25 20:44:05 crc kubenswrapper[4983]: I1125 20:44:05.253287 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" podUID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Nov 25 20:44:05 crc kubenswrapper[4983]: I1125 20:44:05.254407 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:44:09 crc kubenswrapper[4983]: I1125 20:44:09.927855 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:44:09 crc kubenswrapper[4983]: I1125 20:44:09.928523 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:44:10 crc kubenswrapper[4983]: I1125 20:44:10.253707 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" podUID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Nov 25 20:44:12 crc kubenswrapper[4983]: E1125 20:44:12.836352 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 25 20:44:12 crc kubenswrapper[4983]: E1125 20:44:12.838697 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rh8fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-h4hwg_openstack(24119f4e-9bb9-4f12-a031-03ec811465d1): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Nov 25 20:44:12 crc kubenswrapper[4983]: E1125 20:44:12.839921 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-h4hwg" podUID="24119f4e-9bb9-4f12-a031-03ec811465d1" Nov 25 20:44:12 crc kubenswrapper[4983]: I1125 20:44:12.973502 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:44:12 crc kubenswrapper[4983]: I1125 20:44:12.983079 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:44:12 crc kubenswrapper[4983]: I1125 20:44:12.990406 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.008015 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.102856 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ed84598-dc8c-4060-b762-1c8240ed61fa-horizon-secret-key\") pod \"6ed84598-dc8c-4060-b762-1c8240ed61fa\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.103263 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-config-data\") pod \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.103329 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ed84598-dc8c-4060-b762-1c8240ed61fa-logs\") pod \"6ed84598-dc8c-4060-b762-1c8240ed61fa\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.103389 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-scripts\") pod \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.103432 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18b7eb10-6c61-469a-87c6-d263f94dce5d-config\") pod \"18b7eb10-6c61-469a-87c6-d263f94dce5d\" (UID: \"18b7eb10-6c61-469a-87c6-d263f94dce5d\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.103461 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6ed84598-dc8c-4060-b762-1c8240ed61fa-scripts\") pod \"6ed84598-dc8c-4060-b762-1c8240ed61fa\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.103541 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-horizon-secret-key\") pod \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.103607 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tphnx\" (UniqueName: \"kubernetes.io/projected/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-kube-api-access-tphnx\") pod \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.103702 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx274\" (UniqueName: \"kubernetes.io/projected/6ed84598-dc8c-4060-b762-1c8240ed61fa-kube-api-access-vx274\") pod \"6ed84598-dc8c-4060-b762-1c8240ed61fa\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.103767 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-scripts\") pod \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.103811 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-horizon-secret-key\") pod \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " Nov 25 20:44:13 crc 
kubenswrapper[4983]: I1125 20:44:13.103841 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-config-data\") pod \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.103905 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffnvf\" (UniqueName: \"kubernetes.io/projected/18b7eb10-6c61-469a-87c6-d263f94dce5d-kube-api-access-ffnvf\") pod \"18b7eb10-6c61-469a-87c6-d263f94dce5d\" (UID: \"18b7eb10-6c61-469a-87c6-d263f94dce5d\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.103934 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b7eb10-6c61-469a-87c6-d263f94dce5d-combined-ca-bundle\") pod \"18b7eb10-6c61-469a-87c6-d263f94dce5d\" (UID: \"18b7eb10-6c61-469a-87c6-d263f94dce5d\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.104014 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-logs\") pod \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.104095 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ed84598-dc8c-4060-b762-1c8240ed61fa-config-data\") pod \"6ed84598-dc8c-4060-b762-1c8240ed61fa\" (UID: \"6ed84598-dc8c-4060-b762-1c8240ed61fa\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.104146 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h78pr\" (UniqueName: 
\"kubernetes.io/projected/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-kube-api-access-h78pr\") pod \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\" (UID: \"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.104224 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-logs\") pod \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\" (UID: \"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b\") " Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.105869 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-logs" (OuterVolumeSpecName: "logs") pod "6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b" (UID: "6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.109539 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-logs" (OuterVolumeSpecName: "logs") pod "bd9b0d36-cf2a-4218-88ef-fd0d74bb603e" (UID: "bd9b0d36-cf2a-4218-88ef-fd0d74bb603e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.109589 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ed84598-dc8c-4060-b762-1c8240ed61fa-config-data" (OuterVolumeSpecName: "config-data") pod "6ed84598-dc8c-4060-b762-1c8240ed61fa" (UID: "6ed84598-dc8c-4060-b762-1c8240ed61fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.111942 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-config-data" (OuterVolumeSpecName: "config-data") pod "bd9b0d36-cf2a-4218-88ef-fd0d74bb603e" (UID: "bd9b0d36-cf2a-4218-88ef-fd0d74bb603e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.111973 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-scripts" (OuterVolumeSpecName: "scripts") pod "bd9b0d36-cf2a-4218-88ef-fd0d74bb603e" (UID: "bd9b0d36-cf2a-4218-88ef-fd0d74bb603e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.112625 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ed84598-dc8c-4060-b762-1c8240ed61fa-logs" (OuterVolumeSpecName: "logs") pod "6ed84598-dc8c-4060-b762-1c8240ed61fa" (UID: "6ed84598-dc8c-4060-b762-1c8240ed61fa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.113453 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-config-data" (OuterVolumeSpecName: "config-data") pod "6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b" (UID: "6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.115159 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "bd9b0d36-cf2a-4218-88ef-fd0d74bb603e" (UID: "bd9b0d36-cf2a-4218-88ef-fd0d74bb603e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.116027 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed84598-dc8c-4060-b762-1c8240ed61fa-kube-api-access-vx274" (OuterVolumeSpecName: "kube-api-access-vx274") pod "6ed84598-dc8c-4060-b762-1c8240ed61fa" (UID: "6ed84598-dc8c-4060-b762-1c8240ed61fa"). InnerVolumeSpecName "kube-api-access-vx274". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.116605 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ed84598-dc8c-4060-b762-1c8240ed61fa-scripts" (OuterVolumeSpecName: "scripts") pod "6ed84598-dc8c-4060-b762-1c8240ed61fa" (UID: "6ed84598-dc8c-4060-b762-1c8240ed61fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.116713 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ed84598-dc8c-4060-b762-1c8240ed61fa-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6ed84598-dc8c-4060-b762-1c8240ed61fa" (UID: "6ed84598-dc8c-4060-b762-1c8240ed61fa"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.121480 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b" (UID: "6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.125070 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-kube-api-access-h78pr" (OuterVolumeSpecName: "kube-api-access-h78pr") pod "bd9b0d36-cf2a-4218-88ef-fd0d74bb603e" (UID: "bd9b0d36-cf2a-4218-88ef-fd0d74bb603e"). InnerVolumeSpecName "kube-api-access-h78pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.126217 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-scripts" (OuterVolumeSpecName: "scripts") pod "6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b" (UID: "6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.127881 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-kube-api-access-tphnx" (OuterVolumeSpecName: "kube-api-access-tphnx") pod "6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b" (UID: "6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b"). InnerVolumeSpecName "kube-api-access-tphnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.128765 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b7eb10-6c61-469a-87c6-d263f94dce5d-kube-api-access-ffnvf" (OuterVolumeSpecName: "kube-api-access-ffnvf") pod "18b7eb10-6c61-469a-87c6-d263f94dce5d" (UID: "18b7eb10-6c61-469a-87c6-d263f94dce5d"). InnerVolumeSpecName "kube-api-access-ffnvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.142517 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b7eb10-6c61-469a-87c6-d263f94dce5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18b7eb10-6c61-469a-87c6-d263f94dce5d" (UID: "18b7eb10-6c61-469a-87c6-d263f94dce5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.151602 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b7eb10-6c61-469a-87c6-d263f94dce5d-config" (OuterVolumeSpecName: "config") pod "18b7eb10-6c61-469a-87c6-d263f94dce5d" (UID: "18b7eb10-6c61-469a-87c6-d263f94dce5d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.206980 4983 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207025 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tphnx\" (UniqueName: \"kubernetes.io/projected/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-kube-api-access-tphnx\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207060 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx274\" (UniqueName: \"kubernetes.io/projected/6ed84598-dc8c-4060-b762-1c8240ed61fa-kube-api-access-vx274\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207073 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207084 4983 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207093 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207103 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffnvf\" (UniqueName: \"kubernetes.io/projected/18b7eb10-6c61-469a-87c6-d263f94dce5d-kube-api-access-ffnvf\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc 
kubenswrapper[4983]: I1125 20:44:13.207130 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b7eb10-6c61-469a-87c6-d263f94dce5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207139 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207148 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ed84598-dc8c-4060-b762-1c8240ed61fa-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207156 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h78pr\" (UniqueName: \"kubernetes.io/projected/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e-kube-api-access-h78pr\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207165 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207175 4983 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ed84598-dc8c-4060-b762-1c8240ed61fa-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207183 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207210 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6ed84598-dc8c-4060-b762-1c8240ed61fa-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207220 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207228 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/18b7eb10-6c61-469a-87c6-d263f94dce5d-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.207236 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ed84598-dc8c-4060-b762-1c8240ed61fa-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.842770 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-sfn6q" event={"ID":"18b7eb10-6c61-469a-87c6-d263f94dce5d","Type":"ContainerDied","Data":"16d64e8f48157ebf17f9764cb9c391833e46ecacb05d151bd202762174947c38"} Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.842819 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d64e8f48157ebf17f9764cb9c391833e46ecacb05d151bd202762174947c38" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.842865 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-sfn6q" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.845141 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-768fff7bd9-zjc9n" event={"ID":"6ed84598-dc8c-4060-b762-1c8240ed61fa","Type":"ContainerDied","Data":"96330f7326c19c2d52e0d608e79567d073e40c1c9ceba871b8e6b48a27f853bd"} Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.845162 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-768fff7bd9-zjc9n" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.847825 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-649c7d5f69-gqrc7" event={"ID":"bd9b0d36-cf2a-4218-88ef-fd0d74bb603e","Type":"ContainerDied","Data":"fb6e86290dd6ecf6ad5a0b8e77e9be327f6a5630500a79c827b059e943b57016"} Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.847879 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-649c7d5f69-gqrc7" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.852016 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-794b99d65-vhntr" event={"ID":"6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b","Type":"ContainerDied","Data":"1518f752a8ed2172f96da8fa95ff35f3bf2bf5cbfbb2a490109382217d75493d"} Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.852054 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-794b99d65-vhntr" Nov 25 20:44:13 crc kubenswrapper[4983]: E1125 20:44:13.856251 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-h4hwg" podUID="24119f4e-9bb9-4f12-a031-03ec811465d1" Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.915279 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-768fff7bd9-zjc9n"] Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.927043 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-768fff7bd9-zjc9n"] Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.967664 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-649c7d5f69-gqrc7"] Nov 25 20:44:13 crc kubenswrapper[4983]: I1125 20:44:13.981292 4983 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-649c7d5f69-gqrc7"] Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.003384 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-794b99d65-vhntr"] Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.011500 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-794b99d65-vhntr"] Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.212217 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t98c5"] Nov 25 20:44:14 crc kubenswrapper[4983]: E1125 20:44:14.212625 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b7eb10-6c61-469a-87c6-d263f94dce5d" containerName="neutron-db-sync" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.212639 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b7eb10-6c61-469a-87c6-d263f94dce5d" containerName="neutron-db-sync" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.212823 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b7eb10-6c61-469a-87c6-d263f94dce5d" containerName="neutron-db-sync" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.213845 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.250721 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t98c5"] Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.329126 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.329204 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-dns-svc\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.329232 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.329288 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.329364 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-config\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.329390 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvp8v\" (UniqueName: \"kubernetes.io/projected/850af343-20d4-4033-9414-c22c6a180ffa-kube-api-access-bvp8v\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.354593 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7f4b565868-4nbfx"] Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.360698 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.363487 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.363770 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qs9rd" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.363912 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.364063 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.383162 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f4b565868-4nbfx"] Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.430865 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.430959 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-config\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.430986 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvp8v\" (UniqueName: \"kubernetes.io/projected/850af343-20d4-4033-9414-c22c6a180ffa-kube-api-access-bvp8v\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.431028 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.431055 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-dns-svc\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.431077 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.431918 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.432416 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.432931 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-config\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.433791 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.434287 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-dns-svc\") 
pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.463256 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvp8v\" (UniqueName: \"kubernetes.io/projected/850af343-20d4-4033-9414-c22c6a180ffa-kube-api-access-bvp8v\") pod \"dnsmasq-dns-55f844cf75-t98c5\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.532898 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-ovndb-tls-certs\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.533005 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-combined-ca-bundle\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.533123 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-config\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.533201 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-httpd-config\") pod 
\"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.533441 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxvzb\" (UniqueName: \"kubernetes.io/projected/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-kube-api-access-qxvzb\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.546687 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.634965 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-ovndb-tls-certs\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.635078 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-combined-ca-bundle\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.635158 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-config\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.635186 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-httpd-config\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.635249 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxvzb\" (UniqueName: \"kubernetes.io/projected/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-kube-api-access-qxvzb\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.639923 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-ovndb-tls-certs\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.641430 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-httpd-config\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.641936 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-config\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.642585 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-combined-ca-bundle\") pod 
\"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.657390 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxvzb\" (UniqueName: \"kubernetes.io/projected/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-kube-api-access-qxvzb\") pod \"neutron-7f4b565868-4nbfx\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.683576 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.798280 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 20:44:14 crc kubenswrapper[4983]: I1125 20:44:14.798329 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 20:44:15 crc kubenswrapper[4983]: E1125 20:44:15.198417 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 25 20:44:15 crc kubenswrapper[4983]: E1125 20:44:15.199056 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rj4wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7dskv_openstack(cca9d2b3-2f79-4d38-8427-f5bfae9fc314): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 20:44:15 crc kubenswrapper[4983]: E1125 20:44:15.200150 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7dskv" podUID="cca9d2b3-2f79-4d38-8427-f5bfae9fc314" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.294966 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.298716 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.455768 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-ovsdbserver-sb\") pod \"f58c94e7-0496-482c-aa85-079d35d0bd31\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.455861 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-ovsdbserver-nb\") pod \"f58c94e7-0496-482c-aa85-079d35d0bd31\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.455922 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-config-data\") pod \"9eb63554-a61c-478b-b626-83d825a75016\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.456005 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-dns-svc\") pod \"f58c94e7-0496-482c-aa85-079d35d0bd31\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.456044 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-combined-ca-bundle\") pod \"9eb63554-a61c-478b-b626-83d825a75016\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.456074 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4whft\" 
(UniqueName: \"kubernetes.io/projected/9eb63554-a61c-478b-b626-83d825a75016-kube-api-access-4whft\") pod \"9eb63554-a61c-478b-b626-83d825a75016\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.456110 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr2jd\" (UniqueName: \"kubernetes.io/projected/f58c94e7-0496-482c-aa85-079d35d0bd31-kube-api-access-gr2jd\") pod \"f58c94e7-0496-482c-aa85-079d35d0bd31\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.456150 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-config\") pod \"f58c94e7-0496-482c-aa85-079d35d0bd31\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.456181 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-dns-swift-storage-0\") pod \"f58c94e7-0496-482c-aa85-079d35d0bd31\" (UID: \"f58c94e7-0496-482c-aa85-079d35d0bd31\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.456299 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-public-tls-certs\") pod \"9eb63554-a61c-478b-b626-83d825a75016\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.456403 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb63554-a61c-478b-b626-83d825a75016-logs\") pod \"9eb63554-a61c-478b-b626-83d825a75016\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " Nov 25 20:44:15 crc kubenswrapper[4983]: 
I1125 20:44:15.456443 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-scripts\") pod \"9eb63554-a61c-478b-b626-83d825a75016\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.456479 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9eb63554-a61c-478b-b626-83d825a75016-httpd-run\") pod \"9eb63554-a61c-478b-b626-83d825a75016\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.456506 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"9eb63554-a61c-478b-b626-83d825a75016\" (UID: \"9eb63554-a61c-478b-b626-83d825a75016\") " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.459711 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb63554-a61c-478b-b626-83d825a75016-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9eb63554-a61c-478b-b626-83d825a75016" (UID: "9eb63554-a61c-478b-b626-83d825a75016"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.459786 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb63554-a61c-478b-b626-83d825a75016-logs" (OuterVolumeSpecName: "logs") pod "9eb63554-a61c-478b-b626-83d825a75016" (UID: "9eb63554-a61c-478b-b626-83d825a75016"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.461970 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f58c94e7-0496-482c-aa85-079d35d0bd31-kube-api-access-gr2jd" (OuterVolumeSpecName: "kube-api-access-gr2jd") pod "f58c94e7-0496-482c-aa85-079d35d0bd31" (UID: "f58c94e7-0496-482c-aa85-079d35d0bd31"). InnerVolumeSpecName "kube-api-access-gr2jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.462540 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "9eb63554-a61c-478b-b626-83d825a75016" (UID: "9eb63554-a61c-478b-b626-83d825a75016"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.465291 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-scripts" (OuterVolumeSpecName: "scripts") pod "9eb63554-a61c-478b-b626-83d825a75016" (UID: "9eb63554-a61c-478b-b626-83d825a75016"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.466757 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb63554-a61c-478b-b626-83d825a75016-kube-api-access-4whft" (OuterVolumeSpecName: "kube-api-access-4whft") pod "9eb63554-a61c-478b-b626-83d825a75016" (UID: "9eb63554-a61c-478b-b626-83d825a75016"). InnerVolumeSpecName "kube-api-access-4whft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.506735 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eb63554-a61c-478b-b626-83d825a75016" (UID: "9eb63554-a61c-478b-b626-83d825a75016"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.516432 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f58c94e7-0496-482c-aa85-079d35d0bd31" (UID: "f58c94e7-0496-482c-aa85-079d35d0bd31"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.520361 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f58c94e7-0496-482c-aa85-079d35d0bd31" (UID: "f58c94e7-0496-482c-aa85-079d35d0bd31"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.546208 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f58c94e7-0496-482c-aa85-079d35d0bd31" (UID: "f58c94e7-0496-482c-aa85-079d35d0bd31"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.559263 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.559294 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.559304 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.559314 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4whft\" (UniqueName: \"kubernetes.io/projected/9eb63554-a61c-478b-b626-83d825a75016-kube-api-access-4whft\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.559326 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr2jd\" (UniqueName: \"kubernetes.io/projected/f58c94e7-0496-482c-aa85-079d35d0bd31-kube-api-access-gr2jd\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.559336 4983 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.559344 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9eb63554-a61c-478b-b626-83d825a75016-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 
20:44:15.559351 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.559359 4983 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9eb63554-a61c-478b-b626-83d825a75016-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.559379 4983 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.562419 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-config" (OuterVolumeSpecName: "config") pod "f58c94e7-0496-482c-aa85-079d35d0bd31" (UID: "f58c94e7-0496-482c-aa85-079d35d0bd31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.563260 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f58c94e7-0496-482c-aa85-079d35d0bd31" (UID: "f58c94e7-0496-482c-aa85-079d35d0bd31"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.563367 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-config-data" (OuterVolumeSpecName: "config-data") pod "9eb63554-a61c-478b-b626-83d825a75016" (UID: "9eb63554-a61c-478b-b626-83d825a75016"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.574375 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9eb63554-a61c-478b-b626-83d825a75016" (UID: "9eb63554-a61c-478b-b626-83d825a75016"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.583646 4983 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.620159 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b" path="/var/lib/kubelet/pods/6e8c5bd6-b1fe-4efe-a9bf-8567490cf09b/volumes" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.620667 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ed84598-dc8c-4060-b762-1c8240ed61fa" path="/var/lib/kubelet/pods/6ed84598-dc8c-4060-b762-1c8240ed61fa/volumes" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.621123 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd9b0d36-cf2a-4218-88ef-fd0d74bb603e" path="/var/lib/kubelet/pods/bd9b0d36-cf2a-4218-88ef-fd0d74bb603e/volumes" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.661188 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.661228 4983 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 
25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.661241 4983 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.661251 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f58c94e7-0496-482c-aa85-079d35d0bd31-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.661264 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb63554-a61c-478b-b626-83d825a75016-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.923944 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9eb63554-a61c-478b-b626-83d825a75016","Type":"ContainerDied","Data":"88c987c910d285840b9b96d34dd200f30126ed02ebedbe932d7f96c7df9dfe07"} Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.933957 4983 scope.go:117] "RemoveContainer" containerID="460d3e074933817786a1d21a6371e349c1b72bdb52bf9295edfc1bb2b3eb1117" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.929082 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.991229 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" Nov 25 20:44:15 crc kubenswrapper[4983]: I1125 20:44:15.991966 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" event={"ID":"f58c94e7-0496-482c-aa85-079d35d0bd31","Type":"ContainerDied","Data":"3932098fb5acb3b128e61efd10cd3fe1e18d6e063177f57df6e2ae563282ab40"} Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.005651 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:44:16 crc kubenswrapper[4983]: E1125 20:44:16.015488 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7dskv" podUID="cca9d2b3-2f79-4d38-8427-f5bfae9fc314" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.016531 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.053505 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:44:16 crc kubenswrapper[4983]: E1125 20:44:16.054026 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb63554-a61c-478b-b626-83d825a75016" containerName="glance-log" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.054048 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb63554-a61c-478b-b626-83d825a75016" containerName="glance-log" Nov 25 20:44:16 crc kubenswrapper[4983]: E1125 20:44:16.054078 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb63554-a61c-478b-b626-83d825a75016" containerName="glance-httpd" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.054086 4983 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9eb63554-a61c-478b-b626-83d825a75016" containerName="glance-httpd" Nov 25 20:44:16 crc kubenswrapper[4983]: E1125 20:44:16.054098 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerName="init" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.054106 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerName="init" Nov 25 20:44:16 crc kubenswrapper[4983]: E1125 20:44:16.054128 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerName="dnsmasq-dns" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.054136 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerName="dnsmasq-dns" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.054312 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerName="dnsmasq-dns" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.054330 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb63554-a61c-478b-b626-83d825a75016" containerName="glance-log" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.054342 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb63554-a61c-478b-b626-83d825a75016" containerName="glance-httpd" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.055375 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.060198 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.060496 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.173635 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.174160 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc01c1b2-f944-418e-93e6-3022566892b5-logs\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.174216 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc01c1b2-f944-418e-93e6-3022566892b5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.174250 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwn9m\" (UniqueName: \"kubernetes.io/projected/dc01c1b2-f944-418e-93e6-3022566892b5-kube-api-access-wwn9m\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " 
pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.174303 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.174392 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.174418 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.174437 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.176878 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.204924 4983 scope.go:117] "RemoveContainer" containerID="c5542ddbdd9433e9069ef5fc22b65818a6af9036f7642e72f04669b7e0927c82" Nov 
25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.240509 4983 scope.go:117] "RemoveContainer" containerID="b1e16be10d893541defc632a36c2712e2a325224e052d2f75333f7a05717a29a" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.262997 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5m84b"] Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.282216 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc01c1b2-f944-418e-93e6-3022566892b5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.282288 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwn9m\" (UniqueName: \"kubernetes.io/projected/dc01c1b2-f944-418e-93e6-3022566892b5-kube-api-access-wwn9m\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.282371 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.282479 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.282511 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.282532 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.282585 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.282624 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc01c1b2-f944-418e-93e6-3022566892b5-logs\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.283196 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc01c1b2-f944-418e-93e6-3022566892b5-logs\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.283832 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/dc01c1b2-f944-418e-93e6-3022566892b5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.288322 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.298261 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.301269 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.310737 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.311413 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.317493 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5m84b"] Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.327800 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwn9m\" (UniqueName: \"kubernetes.io/projected/dc01c1b2-f944-418e-93e6-3022566892b5-kube-api-access-wwn9m\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.344705 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hjjss"] Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.352861 4983 scope.go:117] "RemoveContainer" containerID="d26bb9acc5feaf9274d27b3957f50b1c7d68558b45d7c64729a3a6bfd17c6a7f" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.367629 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.420109 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.433763 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t98c5"] Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.434426 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:44:16 crc kubenswrapper[4983]: W1125 20:44:16.456064 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod850af343_20d4_4033_9414_c22c6a180ffa.slice/crio-43f95792276c5d7a744a43570a6d7fdc6afa8bd7d603bae8284333b63d1fa9f2 WatchSource:0}: Error finding container 43f95792276c5d7a744a43570a6d7fdc6afa8bd7d603bae8284333b63d1fa9f2: Status 404 returned error can't find the container with id 43f95792276c5d7a744a43570a6d7fdc6afa8bd7d603bae8284333b63d1fa9f2 Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.510246 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.813816 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f4b565868-4nbfx"] Nov 25 20:44:16 crc kubenswrapper[4983]: W1125 20:44:16.875172 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc504575_1f16_42bd_bc6d_b4a9f16bc15c.slice/crio-504dd4d7a2db8920f7fb291befbc0b81912111f12abf94dbceee81a53c7a7d34 WatchSource:0}: Error finding container 504dd4d7a2db8920f7fb291befbc0b81912111f12abf94dbceee81a53c7a7d34: Status 404 returned error can't find the container with id 504dd4d7a2db8920f7fb291befbc0b81912111f12abf94dbceee81a53c7a7d34 Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.934416 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-778b5c8885-ww4ht"] Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.936905 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.939508 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.941618 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 20:44:16 crc kubenswrapper[4983]: I1125 20:44:16.949462 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-778b5c8885-ww4ht"] Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.005817 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-746b6775bd-26zqf" event={"ID":"1ac04518-4a47-43b3-8e9f-84e8f3a80648","Type":"ContainerStarted","Data":"258950fcf68cd7c9df940549ae0451b1b9f70f389d65405e3ab17da233c2b00c"} Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.009229 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjjss" event={"ID":"fb577055-b6b9-4559-9f67-2253439acfc7","Type":"ContainerStarted","Data":"63d82d90e6c2828de264e253cfaa46441b2d82717124c306419de808bca627f0"} Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.009312 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjjss" event={"ID":"fb577055-b6b9-4559-9f67-2253439acfc7","Type":"ContainerStarted","Data":"f0ff05e7584104e3cfa989d815351046e86c065dae8f6b564dd72cdb87acb089"} Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.010757 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f4b565868-4nbfx" event={"ID":"fc504575-1f16-42bd-bc6d-b4a9f16bc15c","Type":"ContainerStarted","Data":"504dd4d7a2db8920f7fb291befbc0b81912111f12abf94dbceee81a53c7a7d34"} Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.014418 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"2ac3ba27-a414-4c7c-b0c5-5d728781ec91","Type":"ContainerStarted","Data":"636e4701646f72fcae36d4c5d67dc6f10edfe31e6573ceb50b04e24b7ca6dc83"} Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.020537 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa168af-421e-4a45-8201-13eb69a20830","Type":"ContainerStarted","Data":"3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454"} Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.031220 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fs7x\" (UniqueName: \"kubernetes.io/projected/14a0dfa8-a664-45d8-bb1d-731f807b1427-kube-api-access-7fs7x\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.031304 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-ovndb-tls-certs\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.031327 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-public-tls-certs\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.031372 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-config\") pod \"neutron-778b5c8885-ww4ht\" (UID: 
\"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.031402 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-combined-ca-bundle\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.031444 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-internal-tls-certs\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.031462 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-httpd-config\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.035997 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vq68b" event={"ID":"a6eb357f-e3ba-4631-951c-65760c2c707d","Type":"ContainerStarted","Data":"0b21eb4d67ba6bcd862bbc3fe962e33fc24484160ab9cd33ac78af8ad35f819c"} Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.048018 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" event={"ID":"850af343-20d4-4033-9414-c22c6a180ffa","Type":"ContainerStarted","Data":"43f95792276c5d7a744a43570a6d7fdc6afa8bd7d603bae8284333b63d1fa9f2"} Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.055577 4983 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hjjss" podStartSLOduration=25.055539921 podStartE2EDuration="25.055539921s" podCreationTimestamp="2025-11-25 20:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:17.030055817 +0000 UTC m=+1038.142589209" watchObservedRunningTime="2025-11-25 20:44:17.055539921 +0000 UTC m=+1038.168073313" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.124329 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-vq68b" podStartSLOduration=6.896708564 podStartE2EDuration="41.124305792s" podCreationTimestamp="2025-11-25 20:43:36 +0000 UTC" firstStartedPulling="2025-11-25 20:43:40.928243373 +0000 UTC m=+1002.040776765" lastFinishedPulling="2025-11-25 20:44:15.155840591 +0000 UTC m=+1036.268373993" observedRunningTime="2025-11-25 20:44:17.054808682 +0000 UTC m=+1038.167342074" watchObservedRunningTime="2025-11-25 20:44:17.124305792 +0000 UTC m=+1038.236839184" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.135413 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-ovndb-tls-certs\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.135476 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-public-tls-certs\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.135577 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-config\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.135632 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-combined-ca-bundle\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.135708 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-internal-tls-certs\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.135742 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-httpd-config\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.135813 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fs7x\" (UniqueName: \"kubernetes.io/projected/14a0dfa8-a664-45d8-bb1d-731f807b1427-kube-api-access-7fs7x\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.155744 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-internal-tls-certs\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.159098 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-ovndb-tls-certs\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.160267 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-public-tls-certs\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.161531 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fs7x\" (UniqueName: \"kubernetes.io/projected/14a0dfa8-a664-45d8-bb1d-731f807b1427-kube-api-access-7fs7x\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.165457 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-config\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.172400 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-combined-ca-bundle\") pod \"neutron-778b5c8885-ww4ht\" (UID: 
\"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.174238 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/14a0dfa8-a664-45d8-bb1d-731f807b1427-httpd-config\") pod \"neutron-778b5c8885-ww4ht\" (UID: \"14a0dfa8-a664-45d8-bb1d-731f807b1427\") " pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.265711 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.278064 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.649653 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb63554-a61c-478b-b626-83d825a75016" path="/var/lib/kubelet/pods/9eb63554-a61c-478b-b626-83d825a75016/volumes" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.650874 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f58c94e7-0496-482c-aa85-079d35d0bd31" path="/var/lib/kubelet/pods/f58c94e7-0496-482c-aa85-079d35d0bd31/volumes" Nov 25 20:44:17 crc kubenswrapper[4983]: I1125 20:44:17.967003 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-778b5c8885-ww4ht"] Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.069195 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-778b5c8885-ww4ht" event={"ID":"14a0dfa8-a664-45d8-bb1d-731f807b1427","Type":"ContainerStarted","Data":"e40bfb52895c5236c1bf06badd9bddadbe342b68e96791d290d65d8a6f81c631"} Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.074827 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f4b565868-4nbfx" 
event={"ID":"fc504575-1f16-42bd-bc6d-b4a9f16bc15c","Type":"ContainerStarted","Data":"ffc4d455db9d3451305faa8382d3138c5bcc09749801fa2664f8a6e99053562b"} Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.074863 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f4b565868-4nbfx" event={"ID":"fc504575-1f16-42bd-bc6d-b4a9f16bc15c","Type":"ContainerStarted","Data":"e826521c79b32648163fd2fd07947722d89b658ebeb52d8656d5304037b3459c"} Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.076064 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.082463 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc01c1b2-f944-418e-93e6-3022566892b5","Type":"ContainerStarted","Data":"71ea98cddc95945425b074e17de06518381dca488dba5a921550dc531bf6a708"} Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.092056 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ac3ba27-a414-4c7c-b0c5-5d728781ec91","Type":"ContainerStarted","Data":"5174a126032da749d36001d8aec55c44cdc275096d8acf2fc759afd0a2a5f9de"} Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.098536 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-746b6775bd-26zqf" event={"ID":"1ac04518-4a47-43b3-8e9f-84e8f3a80648","Type":"ContainerStarted","Data":"9598658aedba74555877ee6f6068a0ccf9b04456d13ea1fee47bfe7f3e7437f7"} Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.105960 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7f4b565868-4nbfx" podStartSLOduration=4.105935143 podStartE2EDuration="4.105935143s" podCreationTimestamp="2025-11-25 20:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-25 20:44:18.100400436 +0000 UTC m=+1039.212933828" watchObservedRunningTime="2025-11-25 20:44:18.105935143 +0000 UTC m=+1039.218468535" Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.114793 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d7c8cfb-s259l" event={"ID":"ed474a92-4901-4ded-89c1-736427d72c92","Type":"ContainerStarted","Data":"37163508ad7a4fb394682206e64752d36a50aeed6461ec9027552f9f6877a086"} Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.114930 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f9d7c8cfb-s259l" event={"ID":"ed474a92-4901-4ded-89c1-736427d72c92","Type":"ContainerStarted","Data":"494c4575800297105a5bfc710f76768f7659b8a886952c9d48d6cf6ca33912e5"} Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.126599 4983 generic.go:334] "Generic (PLEG): container finished" podID="850af343-20d4-4033-9414-c22c6a180ffa" containerID="bbef66633a09a8602bb6cde9b9aa8414c7af7155b6f1159e794f3d3631d7b200" exitCode=0 Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.126783 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" event={"ID":"850af343-20d4-4033-9414-c22c6a180ffa","Type":"ContainerDied","Data":"bbef66633a09a8602bb6cde9b9aa8414c7af7155b6f1159e794f3d3631d7b200"} Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.126826 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" event={"ID":"850af343-20d4-4033-9414-c22c6a180ffa","Type":"ContainerStarted","Data":"0a7efe5fad316c68333788e31ea6ec4d8c2d6414a4552d2f357f0530e7d9478a"} Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.127964 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.137300 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/horizon-746b6775bd-26zqf" podStartSLOduration=3.197513835 podStartE2EDuration="33.137270582s" podCreationTimestamp="2025-11-25 20:43:45 +0000 UTC" firstStartedPulling="2025-11-25 20:43:46.412917814 +0000 UTC m=+1007.525451206" lastFinishedPulling="2025-11-25 20:44:16.352674561 +0000 UTC m=+1037.465207953" observedRunningTime="2025-11-25 20:44:18.133956915 +0000 UTC m=+1039.246490337" watchObservedRunningTime="2025-11-25 20:44:18.137270582 +0000 UTC m=+1039.249803974" Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.171983 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7f9d7c8cfb-s259l" podStartSLOduration=-9223372003.68282 podStartE2EDuration="33.171957031s" podCreationTimestamp="2025-11-25 20:43:45 +0000 UTC" firstStartedPulling="2025-11-25 20:43:46.51777022 +0000 UTC m=+1007.630303612" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:18.155628058 +0000 UTC m=+1039.268161460" watchObservedRunningTime="2025-11-25 20:44:18.171957031 +0000 UTC m=+1039.284490413" Nov 25 20:44:18 crc kubenswrapper[4983]: I1125 20:44:18.183989 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" podStartSLOduration=4.183965239 podStartE2EDuration="4.183965239s" podCreationTimestamp="2025-11-25 20:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:18.177805676 +0000 UTC m=+1039.290339078" watchObservedRunningTime="2025-11-25 20:44:18.183965239 +0000 UTC m=+1039.296498631" Nov 25 20:44:19 crc kubenswrapper[4983]: I1125 20:44:19.159301 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-778b5c8885-ww4ht" event={"ID":"14a0dfa8-a664-45d8-bb1d-731f807b1427","Type":"ContainerStarted","Data":"2a8efc130342b19639eaab2d03f72333ed1c1e83983b7905d7eaf412becf841c"} Nov 25 20:44:19 crc 
kubenswrapper[4983]: I1125 20:44:19.160052 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-778b5c8885-ww4ht" event={"ID":"14a0dfa8-a664-45d8-bb1d-731f807b1427","Type":"ContainerStarted","Data":"dfe9a45612cbaaa68049aed06118e70df9d5a3a05850efd5f62a1fbe43f5c420"} Nov 25 20:44:19 crc kubenswrapper[4983]: I1125 20:44:19.160870 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:19 crc kubenswrapper[4983]: I1125 20:44:19.173976 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc01c1b2-f944-418e-93e6-3022566892b5","Type":"ContainerStarted","Data":"8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61"} Nov 25 20:44:19 crc kubenswrapper[4983]: I1125 20:44:19.177769 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ac3ba27-a414-4c7c-b0c5-5d728781ec91","Type":"ContainerStarted","Data":"f50346e4e65d94d575c5457244b88459a1177672fd89d5a5b3b3538898b2c7b1"} Nov 25 20:44:19 crc kubenswrapper[4983]: I1125 20:44:19.191686 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-778b5c8885-ww4ht" podStartSLOduration=3.19165976 podStartE2EDuration="3.19165976s" podCreationTimestamp="2025-11-25 20:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:19.187902881 +0000 UTC m=+1040.300436273" watchObservedRunningTime="2025-11-25 20:44:19.19165976 +0000 UTC m=+1040.304193152" Nov 25 20:44:19 crc kubenswrapper[4983]: I1125 20:44:19.221650 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=27.221630624 podStartE2EDuration="27.221630624s" podCreationTimestamp="2025-11-25 20:43:52 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:19.219860397 +0000 UTC m=+1040.332393789" watchObservedRunningTime="2025-11-25 20:44:19.221630624 +0000 UTC m=+1040.334164016" Nov 25 20:44:20 crc kubenswrapper[4983]: I1125 20:44:20.216098 4983 generic.go:334] "Generic (PLEG): container finished" podID="a6eb357f-e3ba-4631-951c-65760c2c707d" containerID="0b21eb4d67ba6bcd862bbc3fe962e33fc24484160ab9cd33ac78af8ad35f819c" exitCode=0 Nov 25 20:44:20 crc kubenswrapper[4983]: I1125 20:44:20.217657 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vq68b" event={"ID":"a6eb357f-e3ba-4631-951c-65760c2c707d","Type":"ContainerDied","Data":"0b21eb4d67ba6bcd862bbc3fe962e33fc24484160ab9cd33ac78af8ad35f819c"} Nov 25 20:44:20 crc kubenswrapper[4983]: I1125 20:44:20.225047 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc01c1b2-f944-418e-93e6-3022566892b5","Type":"ContainerStarted","Data":"4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a"} Nov 25 20:44:20 crc kubenswrapper[4983]: I1125 20:44:20.252883 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-5m84b" podUID="f58c94e7-0496-482c-aa85-079d35d0bd31" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Nov 25 20:44:20 crc kubenswrapper[4983]: I1125 20:44:20.271678 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.271654187 podStartE2EDuration="5.271654187s" podCreationTimestamp="2025-11-25 20:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:20.263252934 +0000 UTC m=+1041.375786326" watchObservedRunningTime="2025-11-25 20:44:20.271654187 +0000 UTC m=+1041.384187579" 
Nov 25 20:44:21 crc kubenswrapper[4983]: I1125 20:44:21.234738 4983 generic.go:334] "Generic (PLEG): container finished" podID="fb577055-b6b9-4559-9f67-2253439acfc7" containerID="63d82d90e6c2828de264e253cfaa46441b2d82717124c306419de808bca627f0" exitCode=0 Nov 25 20:44:21 crc kubenswrapper[4983]: I1125 20:44:21.234832 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjjss" event={"ID":"fb577055-b6b9-4559-9f67-2253439acfc7","Type":"ContainerDied","Data":"63d82d90e6c2828de264e253cfaa46441b2d82717124c306419de808bca627f0"} Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.250615 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vq68b" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.257098 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.264265 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjjss" event={"ID":"fb577055-b6b9-4559-9f67-2253439acfc7","Type":"ContainerDied","Data":"f0ff05e7584104e3cfa989d815351046e86c065dae8f6b564dd72cdb87acb089"} Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.264330 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0ff05e7584104e3cfa989d815351046e86c065dae8f6b564dd72cdb87acb089" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.264282 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hjjss" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.266132 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vq68b" event={"ID":"a6eb357f-e3ba-4631-951c-65760c2c707d","Type":"ContainerDied","Data":"b1d28aa54ca8ff0f59bb5c5ebf622fc44dcdf2cfd61f8c25340672f4015ad8ce"} Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.266193 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d28aa54ca8ff0f59bb5c5ebf622fc44dcdf2cfd61f8c25340672f4015ad8ce" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.266164 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vq68b" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.307396 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-fernet-keys\") pod \"fb577055-b6b9-4559-9f67-2253439acfc7\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.307469 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-config-data\") pod \"a6eb357f-e3ba-4631-951c-65760c2c707d\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.307547 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98jj2\" (UniqueName: \"kubernetes.io/projected/a6eb357f-e3ba-4631-951c-65760c2c707d-kube-api-access-98jj2\") pod \"a6eb357f-e3ba-4631-951c-65760c2c707d\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.307605 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-credential-keys\") pod \"fb577055-b6b9-4559-9f67-2253439acfc7\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.307633 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9gss\" (UniqueName: \"kubernetes.io/projected/fb577055-b6b9-4559-9f67-2253439acfc7-kube-api-access-w9gss\") pod \"fb577055-b6b9-4559-9f67-2253439acfc7\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.307759 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-combined-ca-bundle\") pod \"fb577055-b6b9-4559-9f67-2253439acfc7\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.307850 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-combined-ca-bundle\") pod \"a6eb357f-e3ba-4631-951c-65760c2c707d\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.307912 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-config-data\") pod \"fb577055-b6b9-4559-9f67-2253439acfc7\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.308029 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-scripts\") pod \"a6eb357f-e3ba-4631-951c-65760c2c707d\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " Nov 25 20:44:23 crc kubenswrapper[4983]: 
I1125 20:44:23.308082 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6eb357f-e3ba-4631-951c-65760c2c707d-logs\") pod \"a6eb357f-e3ba-4631-951c-65760c2c707d\" (UID: \"a6eb357f-e3ba-4631-951c-65760c2c707d\") " Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.308105 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-scripts\") pod \"fb577055-b6b9-4559-9f67-2253439acfc7\" (UID: \"fb577055-b6b9-4559-9f67-2253439acfc7\") " Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.313374 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fb577055-b6b9-4559-9f67-2253439acfc7" (UID: "fb577055-b6b9-4559-9f67-2253439acfc7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.316766 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6eb357f-e3ba-4631-951c-65760c2c707d-logs" (OuterVolumeSpecName: "logs") pod "a6eb357f-e3ba-4631-951c-65760c2c707d" (UID: "a6eb357f-e3ba-4631-951c-65760c2c707d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.320416 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-scripts" (OuterVolumeSpecName: "scripts") pod "fb577055-b6b9-4559-9f67-2253439acfc7" (UID: "fb577055-b6b9-4559-9f67-2253439acfc7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.330803 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.333938 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.333970 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.333987 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.332491 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6eb357f-e3ba-4631-951c-65760c2c707d-kube-api-access-98jj2" (OuterVolumeSpecName: "kube-api-access-98jj2") pod "a6eb357f-e3ba-4631-951c-65760c2c707d" (UID: "a6eb357f-e3ba-4631-951c-65760c2c707d"). InnerVolumeSpecName "kube-api-access-98jj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.335222 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-scripts" (OuterVolumeSpecName: "scripts") pod "a6eb357f-e3ba-4631-951c-65760c2c707d" (UID: "a6eb357f-e3ba-4631-951c-65760c2c707d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.339733 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb577055-b6b9-4559-9f67-2253439acfc7-kube-api-access-w9gss" (OuterVolumeSpecName: "kube-api-access-w9gss") pod "fb577055-b6b9-4559-9f67-2253439acfc7" (UID: "fb577055-b6b9-4559-9f67-2253439acfc7"). InnerVolumeSpecName "kube-api-access-w9gss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.340724 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fb577055-b6b9-4559-9f67-2253439acfc7" (UID: "fb577055-b6b9-4559-9f67-2253439acfc7"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.370981 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.375051 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-config-data" (OuterVolumeSpecName: "config-data") pod "a6eb357f-e3ba-4631-951c-65760c2c707d" (UID: "a6eb357f-e3ba-4631-951c-65760c2c707d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.385230 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb577055-b6b9-4559-9f67-2253439acfc7" (UID: "fb577055-b6b9-4559-9f67-2253439acfc7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.410605 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.410648 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.410665 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6eb357f-e3ba-4631-951c-65760c2c707d-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.410674 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.410682 4983 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.410692 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.410701 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98jj2\" (UniqueName: \"kubernetes.io/projected/a6eb357f-e3ba-4631-951c-65760c2c707d-kube-api-access-98jj2\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.410709 4983 reconciler_common.go:293] "Volume detached for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.410717 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9gss\" (UniqueName: \"kubernetes.io/projected/fb577055-b6b9-4559-9f67-2253439acfc7-kube-api-access-w9gss\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.420170 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-config-data" (OuterVolumeSpecName: "config-data") pod "fb577055-b6b9-4559-9f67-2253439acfc7" (UID: "fb577055-b6b9-4559-9f67-2253439acfc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.431710 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6eb357f-e3ba-4631-951c-65760c2c707d" (UID: "a6eb357f-e3ba-4631-951c-65760c2c707d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.440259 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.513133 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6eb357f-e3ba-4631-951c-65760c2c707d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:23 crc kubenswrapper[4983]: I1125 20:44:23.513180 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb577055-b6b9-4559-9f67-2253439acfc7-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.282285 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa168af-421e-4a45-8201-13eb69a20830","Type":"ContainerStarted","Data":"3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908"} Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.495721 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7b58ff8778-fz55h"] Nov 25 20:44:24 crc kubenswrapper[4983]: E1125 20:44:24.496305 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb577055-b6b9-4559-9f67-2253439acfc7" containerName="keystone-bootstrap" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.496324 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb577055-b6b9-4559-9f67-2253439acfc7" containerName="keystone-bootstrap" Nov 25 20:44:24 crc kubenswrapper[4983]: E1125 20:44:24.496343 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6eb357f-e3ba-4631-951c-65760c2c707d" containerName="placement-db-sync" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.496352 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6eb357f-e3ba-4631-951c-65760c2c707d" 
containerName="placement-db-sync" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.496570 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb577055-b6b9-4559-9f67-2253439acfc7" containerName="keystone-bootstrap" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.496594 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6eb357f-e3ba-4631-951c-65760c2c707d" containerName="placement-db-sync" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.497316 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.506107 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.506143 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.506213 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q4pfh" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.506482 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.506629 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.506895 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.515418 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7ffcbfd47b-hljtd"] Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.518821 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.526726 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-29652" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.527120 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.527292 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.527788 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.527993 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.546848 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ffcbfd47b-hljtd"] Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.547727 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.558580 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b58ff8778-fz55h"] Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.636705 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kgctb"] Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.637121 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" podUID="81a52229-9987-46fe-b40b-a1951d6d0396" containerName="dnsmasq-dns" containerID="cri-o://14009cf7136ae8752d942d54dc30f5b24e74800affa9dc49231548fa0eca6ca6" gracePeriod=10 Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 
20:44:24.639774 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-scripts\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.639816 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-combined-ca-bundle\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.639842 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-combined-ca-bundle\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.639868 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-public-tls-certs\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.639900 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjxf6\" (UniqueName: \"kubernetes.io/projected/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-kube-api-access-bjxf6\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc 
kubenswrapper[4983]: I1125 20:44:24.639920 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-internal-tls-certs\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.639938 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsmx5\" (UniqueName: \"kubernetes.io/projected/545d9a00-2ce8-463f-b16c-6b7c0ac426be-kube-api-access-xsmx5\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.639962 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-config-data\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.639983 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/545d9a00-2ce8-463f-b16c-6b7c0ac426be-logs\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.640006 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-internal-tls-certs\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 
20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.640027 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-credential-keys\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.640052 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-fernet-keys\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.640095 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-scripts\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.640126 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-public-tls-certs\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.640143 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-config-data\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc 
kubenswrapper[4983]: I1125 20:44:24.741715 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-public-tls-certs\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.741803 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-config-data\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.741935 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-scripts\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.741984 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-combined-ca-bundle\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.742004 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-combined-ca-bundle\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.742039 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-public-tls-certs\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.742081 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjxf6\" (UniqueName: \"kubernetes.io/projected/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-kube-api-access-bjxf6\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.742130 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-internal-tls-certs\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.742145 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsmx5\" (UniqueName: \"kubernetes.io/projected/545d9a00-2ce8-463f-b16c-6b7c0ac426be-kube-api-access-xsmx5\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.742173 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-config-data\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.742189 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/545d9a00-2ce8-463f-b16c-6b7c0ac426be-logs\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.742227 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-internal-tls-certs\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.742259 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-credential-keys\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.742276 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-fernet-keys\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.742325 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-scripts\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.750763 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-config-data\") pod 
\"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.751231 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-scripts\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.751429 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/545d9a00-2ce8-463f-b16c-6b7c0ac426be-logs\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.752085 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-public-tls-certs\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.758046 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-public-tls-certs\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.761973 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-combined-ca-bundle\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc 
kubenswrapper[4983]: I1125 20:44:24.763132 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-config-data\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.765190 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-combined-ca-bundle\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.765268 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-credential-keys\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.770069 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-internal-tls-certs\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.777931 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-scripts\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.778127 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/545d9a00-2ce8-463f-b16c-6b7c0ac426be-internal-tls-certs\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.780842 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-fernet-keys\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.782540 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsmx5\" (UniqueName: \"kubernetes.io/projected/545d9a00-2ce8-463f-b16c-6b7c0ac426be-kube-api-access-xsmx5\") pod \"placement-7ffcbfd47b-hljtd\" (UID: \"545d9a00-2ce8-463f-b16c-6b7c0ac426be\") " pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.797360 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjxf6\" (UniqueName: \"kubernetes.io/projected/9aca410d-c0fd-4ba7-81c0-434416f8dfbd-kube-api-access-bjxf6\") pod \"keystone-7b58ff8778-fz55h\" (UID: \"9aca410d-c0fd-4ba7-81c0-434416f8dfbd\") " pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.815036 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:24 crc kubenswrapper[4983]: I1125 20:44:24.842647 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.329172 4983 generic.go:334] "Generic (PLEG): container finished" podID="81a52229-9987-46fe-b40b-a1951d6d0396" containerID="14009cf7136ae8752d942d54dc30f5b24e74800affa9dc49231548fa0eca6ca6" exitCode=0 Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.331365 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" event={"ID":"81a52229-9987-46fe-b40b-a1951d6d0396","Type":"ContainerDied","Data":"14009cf7136ae8752d942d54dc30f5b24e74800affa9dc49231548fa0eca6ca6"} Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.331396 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" event={"ID":"81a52229-9987-46fe-b40b-a1951d6d0396","Type":"ContainerDied","Data":"e6f8d81623a8095ef881d5ac49d21bf8e1d3f9ac831c29c872ff622fb01513fd"} Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.331408 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6f8d81623a8095ef881d5ac49d21bf8e1d3f9ac831c29c872ff622fb01513fd" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.450187 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ffcbfd47b-hljtd"] Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.479246 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.540815 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7b58ff8778-fz55h"] Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.569211 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-config\") pod \"81a52229-9987-46fe-b40b-a1951d6d0396\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.569499 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsj5r\" (UniqueName: \"kubernetes.io/projected/81a52229-9987-46fe-b40b-a1951d6d0396-kube-api-access-dsj5r\") pod \"81a52229-9987-46fe-b40b-a1951d6d0396\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.569566 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-dns-svc\") pod \"81a52229-9987-46fe-b40b-a1951d6d0396\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.569611 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-ovsdbserver-sb\") pod \"81a52229-9987-46fe-b40b-a1951d6d0396\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.569711 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-ovsdbserver-nb\") pod \"81a52229-9987-46fe-b40b-a1951d6d0396\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " Nov 25 
20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.569782 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-dns-swift-storage-0\") pod \"81a52229-9987-46fe-b40b-a1951d6d0396\" (UID: \"81a52229-9987-46fe-b40b-a1951d6d0396\") " Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.600012 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a52229-9987-46fe-b40b-a1951d6d0396-kube-api-access-dsj5r" (OuterVolumeSpecName: "kube-api-access-dsj5r") pod "81a52229-9987-46fe-b40b-a1951d6d0396" (UID: "81a52229-9987-46fe-b40b-a1951d6d0396"). InnerVolumeSpecName "kube-api-access-dsj5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.657588 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.657984 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.660490 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.660908 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.673976 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsj5r\" (UniqueName: \"kubernetes.io/projected/81a52229-9987-46fe-b40b-a1951d6d0396-kube-api-access-dsj5r\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.749076 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-config" (OuterVolumeSpecName: "config") pod "81a52229-9987-46fe-b40b-a1951d6d0396" (UID: "81a52229-9987-46fe-b40b-a1951d6d0396"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.764583 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81a52229-9987-46fe-b40b-a1951d6d0396" (UID: "81a52229-9987-46fe-b40b-a1951d6d0396"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.777459 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.777512 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.782294 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "81a52229-9987-46fe-b40b-a1951d6d0396" (UID: "81a52229-9987-46fe-b40b-a1951d6d0396"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.782376 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81a52229-9987-46fe-b40b-a1951d6d0396" (UID: "81a52229-9987-46fe-b40b-a1951d6d0396"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.797720 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81a52229-9987-46fe-b40b-a1951d6d0396" (UID: "81a52229-9987-46fe-b40b-a1951d6d0396"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.879078 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.879110 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:25 crc kubenswrapper[4983]: I1125 20:44:25.879122 4983 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81a52229-9987-46fe-b40b-a1951d6d0396-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.356335 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b58ff8778-fz55h" 
event={"ID":"9aca410d-c0fd-4ba7-81c0-434416f8dfbd","Type":"ContainerStarted","Data":"cdf5934282f8c185be21a4b8b9e208a58f932ea55be144dccb5545dba5c30d83"} Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.356836 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7b58ff8778-fz55h" event={"ID":"9aca410d-c0fd-4ba7-81c0-434416f8dfbd","Type":"ContainerStarted","Data":"f1e4e75477e715266286439e14bd3ae6abf4b370232341e5ee0a445dbc562398"} Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.357015 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.376146 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h4hwg" event={"ID":"24119f4e-9bb9-4f12-a031-03ec811465d1","Type":"ContainerStarted","Data":"aca4846fb6d5c4f7566e11c39b267a4c806dc7a77e95b4f2fdb3566caef3d71f"} Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.394663 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7b58ff8778-fz55h" podStartSLOduration=2.394614798 podStartE2EDuration="2.394614798s" podCreationTimestamp="2025-11-25 20:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:26.374954158 +0000 UTC m=+1047.487487540" watchObservedRunningTime="2025-11-25 20:44:26.394614798 +0000 UTC m=+1047.507148190" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.400012 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ffcbfd47b-hljtd" event={"ID":"545d9a00-2ce8-463f-b16c-6b7c0ac426be","Type":"ContainerStarted","Data":"706f6b6e0b658d361ba3f92ac780d3021e7cefb22dadbf69d55b6b475d6eb117"} Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.400096 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ffcbfd47b-hljtd" 
event={"ID":"545d9a00-2ce8-463f-b16c-6b7c0ac426be","Type":"ContainerStarted","Data":"1f16daee26b4561014908f68d1f71bcef2312b1cadd349e7303e56aad27f5d7b"} Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.400113 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ffcbfd47b-hljtd" event={"ID":"545d9a00-2ce8-463f-b16c-6b7c0ac426be","Type":"ContainerStarted","Data":"c1486bec16c165a6743ed4a004fd50697f30693287b853857bf0d445a2865406"} Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.400179 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-kgctb" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.405600 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.406416 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.408265 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-h4hwg" podStartSLOduration=6.06189455 podStartE2EDuration="50.408237089s" podCreationTimestamp="2025-11-25 20:43:36 +0000 UTC" firstStartedPulling="2025-11-25 20:43:40.914202111 +0000 UTC m=+1002.026735523" lastFinishedPulling="2025-11-25 20:44:25.26054467 +0000 UTC m=+1046.373078062" observedRunningTime="2025-11-25 20:44:26.392998565 +0000 UTC m=+1047.505531967" watchObservedRunningTime="2025-11-25 20:44:26.408237089 +0000 UTC m=+1047.520770481" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.424805 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7ffcbfd47b-hljtd" podStartSLOduration=2.424779947 podStartE2EDuration="2.424779947s" podCreationTimestamp="2025-11-25 20:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:26.420060092 +0000 UTC m=+1047.532593484" watchObservedRunningTime="2025-11-25 20:44:26.424779947 +0000 UTC m=+1047.537313339" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.435108 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.435170 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.453108 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kgctb"] Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.478284 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-kgctb"] Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.483957 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.492717 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.834083 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.834221 4983 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 20:44:26 crc kubenswrapper[4983]: I1125 20:44:26.836792 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 20:44:27 crc kubenswrapper[4983]: I1125 20:44:27.406857 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 20:44:27 crc 
kubenswrapper[4983]: I1125 20:44:27.407254 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 20:44:27 crc kubenswrapper[4983]: I1125 20:44:27.613893 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81a52229-9987-46fe-b40b-a1951d6d0396" path="/var/lib/kubelet/pods/81a52229-9987-46fe-b40b-a1951d6d0396/volumes" Nov 25 20:44:29 crc kubenswrapper[4983]: I1125 20:44:29.821412 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 20:44:29 crc kubenswrapper[4983]: I1125 20:44:29.822226 4983 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 20:44:29 crc kubenswrapper[4983]: I1125 20:44:29.952691 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 20:44:30 crc kubenswrapper[4983]: I1125 20:44:30.456474 4983 generic.go:334] "Generic (PLEG): container finished" podID="24119f4e-9bb9-4f12-a031-03ec811465d1" containerID="aca4846fb6d5c4f7566e11c39b267a4c806dc7a77e95b4f2fdb3566caef3d71f" exitCode=0 Nov 25 20:44:30 crc kubenswrapper[4983]: I1125 20:44:30.456840 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h4hwg" event={"ID":"24119f4e-9bb9-4f12-a031-03ec811465d1","Type":"ContainerDied","Data":"aca4846fb6d5c4f7566e11c39b267a4c806dc7a77e95b4f2fdb3566caef3d71f"} Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.106977 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.248634 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24119f4e-9bb9-4f12-a031-03ec811465d1-combined-ca-bundle\") pod \"24119f4e-9bb9-4f12-a031-03ec811465d1\" (UID: \"24119f4e-9bb9-4f12-a031-03ec811465d1\") " Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.248707 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24119f4e-9bb9-4f12-a031-03ec811465d1-db-sync-config-data\") pod \"24119f4e-9bb9-4f12-a031-03ec811465d1\" (UID: \"24119f4e-9bb9-4f12-a031-03ec811465d1\") " Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.248786 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh8fn\" (UniqueName: \"kubernetes.io/projected/24119f4e-9bb9-4f12-a031-03ec811465d1-kube-api-access-rh8fn\") pod \"24119f4e-9bb9-4f12-a031-03ec811465d1\" (UID: \"24119f4e-9bb9-4f12-a031-03ec811465d1\") " Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.257704 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24119f4e-9bb9-4f12-a031-03ec811465d1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "24119f4e-9bb9-4f12-a031-03ec811465d1" (UID: "24119f4e-9bb9-4f12-a031-03ec811465d1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.258517 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24119f4e-9bb9-4f12-a031-03ec811465d1-kube-api-access-rh8fn" (OuterVolumeSpecName: "kube-api-access-rh8fn") pod "24119f4e-9bb9-4f12-a031-03ec811465d1" (UID: "24119f4e-9bb9-4f12-a031-03ec811465d1"). 
InnerVolumeSpecName "kube-api-access-rh8fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.279953 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24119f4e-9bb9-4f12-a031-03ec811465d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24119f4e-9bb9-4f12-a031-03ec811465d1" (UID: "24119f4e-9bb9-4f12-a031-03ec811465d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.353142 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24119f4e-9bb9-4f12-a031-03ec811465d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.353190 4983 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24119f4e-9bb9-4f12-a031-03ec811465d1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.353229 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh8fn\" (UniqueName: \"kubernetes.io/projected/24119f4e-9bb9-4f12-a031-03ec811465d1-kube-api-access-rh8fn\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.488540 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h4hwg" event={"ID":"24119f4e-9bb9-4f12-a031-03ec811465d1","Type":"ContainerDied","Data":"803d6b7a851b491fdb40d83299442b5c369ac06e5fde38dc9e48b111ae7932c7"} Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.488602 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="803d6b7a851b491fdb40d83299442b5c369ac06e5fde38dc9e48b111ae7932c7" Nov 25 20:44:33 crc kubenswrapper[4983]: I1125 20:44:33.488684 4983 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h4hwg" Nov 25 20:44:33 crc kubenswrapper[4983]: E1125 20:44:33.817291 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="4fa168af-421e-4a45-8201-13eb69a20830" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.408377 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6d94b4b49b-7bcmx"] Nov 25 20:44:34 crc kubenswrapper[4983]: E1125 20:44:34.408737 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a52229-9987-46fe-b40b-a1951d6d0396" containerName="init" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.408750 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a52229-9987-46fe-b40b-a1951d6d0396" containerName="init" Nov 25 20:44:34 crc kubenswrapper[4983]: E1125 20:44:34.408774 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24119f4e-9bb9-4f12-a031-03ec811465d1" containerName="barbican-db-sync" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.408780 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="24119f4e-9bb9-4f12-a031-03ec811465d1" containerName="barbican-db-sync" Nov 25 20:44:34 crc kubenswrapper[4983]: E1125 20:44:34.408796 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a52229-9987-46fe-b40b-a1951d6d0396" containerName="dnsmasq-dns" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.408802 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a52229-9987-46fe-b40b-a1951d6d0396" containerName="dnsmasq-dns" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.408962 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="24119f4e-9bb9-4f12-a031-03ec811465d1" containerName="barbican-db-sync" Nov 25 
20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.408979 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a52229-9987-46fe-b40b-a1951d6d0396" containerName="dnsmasq-dns" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.410126 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.415309 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-t8tlz" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.416404 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.416549 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.439904 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d94b4b49b-7bcmx"] Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.455473 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-64d7f8cd7f-49776"] Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.470988 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.479687 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.492726 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-64d7f8cd7f-49776"] Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.573428 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7dskv" event={"ID":"cca9d2b3-2f79-4d38-8427-f5bfae9fc314","Type":"ContainerStarted","Data":"1335540bf91f4cafb344bc9dd59382aaadc51c67d9fc88c9019bd0cc3beceda5"} Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.575621 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jsv5\" (UniqueName: \"kubernetes.io/projected/d50f667a-e040-4db9-83d1-a1f72b138332-kube-api-access-6jsv5\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.575707 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50f667a-e040-4db9-83d1-a1f72b138332-combined-ca-bundle\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.575733 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50f667a-e040-4db9-83d1-a1f72b138332-logs\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc 
kubenswrapper[4983]: I1125 20:44:34.575762 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50f667a-e040-4db9-83d1-a1f72b138332-config-data\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.575801 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/067348dd-7070-4616-871c-46a8ec91be00-config-data\") pod \"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.575823 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/067348dd-7070-4616-871c-46a8ec91be00-config-data-custom\") pod \"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.575875 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d50f667a-e040-4db9-83d1-a1f72b138332-config-data-custom\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.575906 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f65m4\" (UniqueName: \"kubernetes.io/projected/067348dd-7070-4616-871c-46a8ec91be00-kube-api-access-f65m4\") pod 
\"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.575939 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067348dd-7070-4616-871c-46a8ec91be00-combined-ca-bundle\") pod \"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.575963 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/067348dd-7070-4616-871c-46a8ec91be00-logs\") pod \"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.580907 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa168af-421e-4a45-8201-13eb69a20830","Type":"ContainerStarted","Data":"4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667"} Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.581170 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fa168af-421e-4a45-8201-13eb69a20830" containerName="ceilometer-notification-agent" containerID="cri-o://3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454" gracePeriod=30 Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.581669 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.581849 4983 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="4fa168af-421e-4a45-8201-13eb69a20830" containerName="sg-core" containerID="cri-o://3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908" gracePeriod=30 Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.581986 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4fa168af-421e-4a45-8201-13eb69a20830" containerName="proxy-httpd" containerID="cri-o://4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667" gracePeriod=30 Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.632375 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ncx96"] Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.638175 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.655616 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ncx96"] Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.662217 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7dskv" podStartSLOduration=2.727152303 podStartE2EDuration="58.662197475s" podCreationTimestamp="2025-11-25 20:43:36 +0000 UTC" firstStartedPulling="2025-11-25 20:43:37.653277459 +0000 UTC m=+998.765810851" lastFinishedPulling="2025-11-25 20:44:33.588322631 +0000 UTC m=+1054.700856023" observedRunningTime="2025-11-25 20:44:34.5970552 +0000 UTC m=+1055.709588592" watchObservedRunningTime="2025-11-25 20:44:34.662197475 +0000 UTC m=+1055.774730867" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.677445 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067348dd-7070-4616-871c-46a8ec91be00-combined-ca-bundle\") pod \"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: 
\"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.677598 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/067348dd-7070-4616-871c-46a8ec91be00-logs\") pod \"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.677727 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jsv5\" (UniqueName: \"kubernetes.io/projected/d50f667a-e040-4db9-83d1-a1f72b138332-kube-api-access-6jsv5\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.677849 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50f667a-e040-4db9-83d1-a1f72b138332-combined-ca-bundle\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.677941 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50f667a-e040-4db9-83d1-a1f72b138332-logs\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.678042 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50f667a-e040-4db9-83d1-a1f72b138332-config-data\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: 
\"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.678143 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/067348dd-7070-4616-871c-46a8ec91be00-config-data\") pod \"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.678265 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/067348dd-7070-4616-871c-46a8ec91be00-config-data-custom\") pod \"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.678384 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d50f667a-e040-4db9-83d1-a1f72b138332-config-data-custom\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.678465 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f65m4\" (UniqueName: \"kubernetes.io/projected/067348dd-7070-4616-871c-46a8ec91be00-kube-api-access-f65m4\") pod \"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.678985 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/067348dd-7070-4616-871c-46a8ec91be00-logs\") pod 
\"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.681576 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d50f667a-e040-4db9-83d1-a1f72b138332-logs\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.692383 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067348dd-7070-4616-871c-46a8ec91be00-combined-ca-bundle\") pod \"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.692451 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d50f667a-e040-4db9-83d1-a1f72b138332-combined-ca-bundle\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.693130 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/067348dd-7070-4616-871c-46a8ec91be00-config-data\") pod \"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.696125 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/067348dd-7070-4616-871c-46a8ec91be00-config-data-custom\") pod 
\"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.696125 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d50f667a-e040-4db9-83d1-a1f72b138332-config-data\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.699745 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6b9ff8df78-mv2g6"] Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.702141 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.703756 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f65m4\" (UniqueName: \"kubernetes.io/projected/067348dd-7070-4616-871c-46a8ec91be00-kube-api-access-f65m4\") pod \"barbican-keystone-listener-6d94b4b49b-7bcmx\" (UID: \"067348dd-7070-4616-871c-46a8ec91be00\") " pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.705178 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.710285 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d50f667a-e040-4db9-83d1-a1f72b138332-config-data-custom\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.724659 4983 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-6jsv5\" (UniqueName: \"kubernetes.io/projected/d50f667a-e040-4db9-83d1-a1f72b138332-kube-api-access-6jsv5\") pod \"barbican-worker-64d7f8cd7f-49776\" (UID: \"d50f667a-e040-4db9-83d1-a1f72b138332\") " pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.724843 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b9ff8df78-mv2g6"] Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.729912 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.780420 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-config-data-custom\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.780471 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-config-data\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.780502 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.780583 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-mbwq6\" (UniqueName: \"kubernetes.io/projected/1d2f7584-5f22-4f5a-a58f-8856c28e913c-kube-api-access-mbwq6\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.780603 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.780658 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2f7584-5f22-4f5a-a58f-8856c28e913c-logs\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.780686 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.780707 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-config\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.780752 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.780774 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcxrn\" (UniqueName: \"kubernetes.io/projected/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-kube-api-access-jcxrn\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.780806 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-combined-ca-bundle\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.835387 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-64d7f8cd7f-49776" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.882524 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-combined-ca-bundle\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.883013 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-config-data-custom\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.883033 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-config-data\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.883060 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.883086 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbwq6\" (UniqueName: \"kubernetes.io/projected/1d2f7584-5f22-4f5a-a58f-8856c28e913c-kube-api-access-mbwq6\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " 
pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.883106 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.883143 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2f7584-5f22-4f5a-a58f-8856c28e913c-logs\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.883171 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.883194 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-config\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.883247 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc 
kubenswrapper[4983]: I1125 20:44:34.883267 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcxrn\" (UniqueName: \"kubernetes.io/projected/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-kube-api-access-jcxrn\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.884419 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.886796 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2f7584-5f22-4f5a-a58f-8856c28e913c-logs\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.887304 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.887822 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-config\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.888871 4983 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.890649 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-dns-svc\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.897176 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-config-data-custom\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.898985 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-config-data\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.899469 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-combined-ca-bundle\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.901787 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcxrn\" (UniqueName: 
\"kubernetes.io/projected/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-kube-api-access-jcxrn\") pod \"dnsmasq-dns-85ff748b95-ncx96\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.919308 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbwq6\" (UniqueName: \"kubernetes.io/projected/1d2f7584-5f22-4f5a-a58f-8856c28e913c-kube-api-access-mbwq6\") pod \"barbican-api-6b9ff8df78-mv2g6\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.960231 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:34 crc kubenswrapper[4983]: I1125 20:44:34.979555 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:35 crc kubenswrapper[4983]: I1125 20:44:35.033966 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d94b4b49b-7bcmx"] Nov 25 20:44:35 crc kubenswrapper[4983]: I1125 20:44:35.199253 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-64d7f8cd7f-49776"] Nov 25 20:44:35 crc kubenswrapper[4983]: I1125 20:44:35.596973 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64d7f8cd7f-49776" event={"ID":"d50f667a-e040-4db9-83d1-a1f72b138332","Type":"ContainerStarted","Data":"510c60768d3cfe215966282fbbaa57da96a6aac5bd94a9c60a4651a0ce099188"} Nov 25 20:44:35 crc kubenswrapper[4983]: I1125 20:44:35.606171 4983 generic.go:334] "Generic (PLEG): container finished" podID="4fa168af-421e-4a45-8201-13eb69a20830" containerID="4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667" exitCode=0 Nov 25 20:44:35 crc kubenswrapper[4983]: I1125 20:44:35.606195 4983 generic.go:334] 
"Generic (PLEG): container finished" podID="4fa168af-421e-4a45-8201-13eb69a20830" containerID="3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908" exitCode=2 Nov 25 20:44:35 crc kubenswrapper[4983]: I1125 20:44:35.618150 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6b9ff8df78-mv2g6"] Nov 25 20:44:35 crc kubenswrapper[4983]: I1125 20:44:35.618684 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa168af-421e-4a45-8201-13eb69a20830","Type":"ContainerDied","Data":"4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667"} Nov 25 20:44:35 crc kubenswrapper[4983]: I1125 20:44:35.618712 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa168af-421e-4a45-8201-13eb69a20830","Type":"ContainerDied","Data":"3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908"} Nov 25 20:44:35 crc kubenswrapper[4983]: I1125 20:44:35.618722 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" event={"ID":"067348dd-7070-4616-871c-46a8ec91be00","Type":"ContainerStarted","Data":"ff9e14223a604eb69ac8ad932b853f4706fce3a465ee77dfb5bbee5a188205c4"} Nov 25 20:44:35 crc kubenswrapper[4983]: W1125 20:44:35.628384 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d2f7584_5f22_4f5a_a58f_8856c28e913c.slice/crio-f14573187a45e69eb80e219be60a7612edac67bad38d38eed26f8819a02d95e6 WatchSource:0}: Error finding container f14573187a45e69eb80e219be60a7612edac67bad38d38eed26f8819a02d95e6: Status 404 returned error can't find the container with id f14573187a45e69eb80e219be60a7612edac67bad38d38eed26f8819a02d95e6 Nov 25 20:44:35 crc kubenswrapper[4983]: I1125 20:44:35.660430 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-746b6775bd-26zqf" 
podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Nov 25 20:44:35 crc kubenswrapper[4983]: I1125 20:44:35.661903 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7f9d7c8cfb-s259l" podUID="ed474a92-4901-4ded-89c1-736427d72c92" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Nov 25 20:44:35 crc kubenswrapper[4983]: I1125 20:44:35.700666 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ncx96"] Nov 25 20:44:35 crc kubenswrapper[4983]: W1125 20:44:35.705739 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8bbe420_76df_4aa7_a5ff_2ff482ba93b4.slice/crio-19fdb7627c002f0c879e1ee99ecabcaef1f7495e0be886d8d76cb5df96e002fd WatchSource:0}: Error finding container 19fdb7627c002f0c879e1ee99ecabcaef1f7495e0be886d8d76cb5df96e002fd: Status 404 returned error can't find the container with id 19fdb7627c002f0c879e1ee99ecabcaef1f7495e0be886d8d76cb5df96e002fd Nov 25 20:44:36 crc kubenswrapper[4983]: I1125 20:44:36.617847 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9ff8df78-mv2g6" event={"ID":"1d2f7584-5f22-4f5a-a58f-8856c28e913c","Type":"ContainerStarted","Data":"cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c"} Nov 25 20:44:36 crc kubenswrapper[4983]: I1125 20:44:36.618370 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:36 crc kubenswrapper[4983]: I1125 20:44:36.618385 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9ff8df78-mv2g6" 
event={"ID":"1d2f7584-5f22-4f5a-a58f-8856c28e913c","Type":"ContainerStarted","Data":"28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c"} Nov 25 20:44:36 crc kubenswrapper[4983]: I1125 20:44:36.618396 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9ff8df78-mv2g6" event={"ID":"1d2f7584-5f22-4f5a-a58f-8856c28e913c","Type":"ContainerStarted","Data":"f14573187a45e69eb80e219be60a7612edac67bad38d38eed26f8819a02d95e6"} Nov 25 20:44:36 crc kubenswrapper[4983]: I1125 20:44:36.618407 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:36 crc kubenswrapper[4983]: I1125 20:44:36.621172 4983 generic.go:334] "Generic (PLEG): container finished" podID="f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" containerID="8acb0f28b62cd9251b3cba553421364e2b8556eff7e6d2c1cffe0bb893e34f10" exitCode=0 Nov 25 20:44:36 crc kubenswrapper[4983]: I1125 20:44:36.621240 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" event={"ID":"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4","Type":"ContainerDied","Data":"8acb0f28b62cd9251b3cba553421364e2b8556eff7e6d2c1cffe0bb893e34f10"} Nov 25 20:44:36 crc kubenswrapper[4983]: I1125 20:44:36.621285 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" event={"ID":"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4","Type":"ContainerStarted","Data":"19fdb7627c002f0c879e1ee99ecabcaef1f7495e0be886d8d76cb5df96e002fd"} Nov 25 20:44:36 crc kubenswrapper[4983]: I1125 20:44:36.679613 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6b9ff8df78-mv2g6" podStartSLOduration=2.6795843599999998 podStartE2EDuration="2.67958436s" podCreationTimestamp="2025-11-25 20:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:36.658182133 +0000 
UTC m=+1057.770715515" watchObservedRunningTime="2025-11-25 20:44:36.67958436 +0000 UTC m=+1057.792117752" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.641047 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b69f99f98-bmrwt"] Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.644347 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b69f99f98-bmrwt"] Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.644471 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.650537 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.650976 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.737725 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-public-tls-certs\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.738126 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nq4h\" (UniqueName: \"kubernetes.io/projected/d239c72e-850f-45f1-9f9f-568c2bee1546-kube-api-access-4nq4h\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.738196 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-config-data\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.738235 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-combined-ca-bundle\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.738268 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d239c72e-850f-45f1-9f9f-568c2bee1546-logs\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.738313 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-internal-tls-certs\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.738446 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-config-data-custom\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.840529 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-config-data\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.840593 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-combined-ca-bundle\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.840617 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d239c72e-850f-45f1-9f9f-568c2bee1546-logs\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.840671 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-internal-tls-certs\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.840729 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-config-data-custom\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.840791 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-public-tls-certs\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.840811 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nq4h\" (UniqueName: \"kubernetes.io/projected/d239c72e-850f-45f1-9f9f-568c2bee1546-kube-api-access-4nq4h\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.841334 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d239c72e-850f-45f1-9f9f-568c2bee1546-logs\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.845049 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-combined-ca-bundle\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.845779 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-config-data\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.846248 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-internal-tls-certs\") 
pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.857983 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-public-tls-certs\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.858706 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nq4h\" (UniqueName: \"kubernetes.io/projected/d239c72e-850f-45f1-9f9f-568c2bee1546-kube-api-access-4nq4h\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.865141 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d239c72e-850f-45f1-9f9f-568c2bee1546-config-data-custom\") pod \"barbican-api-5b69f99f98-bmrwt\" (UID: \"d239c72e-850f-45f1-9f9f-568c2bee1546\") " pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:37 crc kubenswrapper[4983]: I1125 20:44:37.961585 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:38 crc kubenswrapper[4983]: I1125 20:44:38.514004 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b69f99f98-bmrwt"] Nov 25 20:44:38 crc kubenswrapper[4983]: W1125 20:44:38.520864 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd239c72e_850f_45f1_9f9f_568c2bee1546.slice/crio-4a2cedfa3fa2289df7bbd887383b140e05034501197bf2c9b0ac6c7e15f74ac1 WatchSource:0}: Error finding container 4a2cedfa3fa2289df7bbd887383b140e05034501197bf2c9b0ac6c7e15f74ac1: Status 404 returned error can't find the container with id 4a2cedfa3fa2289df7bbd887383b140e05034501197bf2c9b0ac6c7e15f74ac1 Nov 25 20:44:38 crc kubenswrapper[4983]: I1125 20:44:38.661242 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64d7f8cd7f-49776" event={"ID":"d50f667a-e040-4db9-83d1-a1f72b138332","Type":"ContainerStarted","Data":"e118e17c423a514693772e551c53042fff24ba7f139d2d42fa53a66214b85be1"} Nov 25 20:44:38 crc kubenswrapper[4983]: I1125 20:44:38.661291 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-64d7f8cd7f-49776" event={"ID":"d50f667a-e040-4db9-83d1-a1f72b138332","Type":"ContainerStarted","Data":"4d3a6439a1b0372d69c216f268e49c1fc7e0fc9173804b1e5ca7f363165cc1d9"} Nov 25 20:44:38 crc kubenswrapper[4983]: I1125 20:44:38.663009 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b69f99f98-bmrwt" event={"ID":"d239c72e-850f-45f1-9f9f-568c2bee1546","Type":"ContainerStarted","Data":"4a2cedfa3fa2289df7bbd887383b140e05034501197bf2c9b0ac6c7e15f74ac1"} Nov 25 20:44:38 crc kubenswrapper[4983]: I1125 20:44:38.665286 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" 
event={"ID":"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4","Type":"ContainerStarted","Data":"812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628"} Nov 25 20:44:38 crc kubenswrapper[4983]: I1125 20:44:38.665457 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:38 crc kubenswrapper[4983]: I1125 20:44:38.667758 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" event={"ID":"067348dd-7070-4616-871c-46a8ec91be00","Type":"ContainerStarted","Data":"ca9d3192489b8763b32164a261b94bbbed7e301a785ef5c051de2c05af8b7de3"} Nov 25 20:44:38 crc kubenswrapper[4983]: I1125 20:44:38.667807 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" event={"ID":"067348dd-7070-4616-871c-46a8ec91be00","Type":"ContainerStarted","Data":"6cb75eea12feb6198f7ab2349328f36934606efb0cab4c54a4763cc20280df3e"} Nov 25 20:44:38 crc kubenswrapper[4983]: I1125 20:44:38.684481 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-64d7f8cd7f-49776" podStartSLOduration=2.362584647 podStartE2EDuration="4.684464605s" podCreationTimestamp="2025-11-25 20:44:34 +0000 UTC" firstStartedPulling="2025-11-25 20:44:35.217129568 +0000 UTC m=+1056.329662960" lastFinishedPulling="2025-11-25 20:44:37.539009526 +0000 UTC m=+1058.651542918" observedRunningTime="2025-11-25 20:44:38.681890897 +0000 UTC m=+1059.794424289" watchObservedRunningTime="2025-11-25 20:44:38.684464605 +0000 UTC m=+1059.796997997" Nov 25 20:44:38 crc kubenswrapper[4983]: I1125 20:44:38.709166 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" podStartSLOduration=4.709148559 podStartE2EDuration="4.709148559s" podCreationTimestamp="2025-11-25 20:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:38.707682 +0000 UTC m=+1059.820215392" watchObservedRunningTime="2025-11-25 20:44:38.709148559 +0000 UTC m=+1059.821681951" Nov 25 20:44:38 crc kubenswrapper[4983]: I1125 20:44:38.732360 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6d94b4b49b-7bcmx" podStartSLOduration=2.341602242 podStartE2EDuration="4.732311252s" podCreationTimestamp="2025-11-25 20:44:34 +0000 UTC" firstStartedPulling="2025-11-25 20:44:35.122921744 +0000 UTC m=+1056.235455136" lastFinishedPulling="2025-11-25 20:44:37.513630754 +0000 UTC m=+1058.626164146" observedRunningTime="2025-11-25 20:44:38.72317903 +0000 UTC m=+1059.835712442" watchObservedRunningTime="2025-11-25 20:44:38.732311252 +0000 UTC m=+1059.844844644" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.658204 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.682112 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b69f99f98-bmrwt" event={"ID":"d239c72e-850f-45f1-9f9f-568c2bee1546","Type":"ContainerStarted","Data":"04f262e017f7c89eec1eb1ee34eb75a0f79347063dac415503114bd4b6875417"} Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.682187 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b69f99f98-bmrwt" event={"ID":"d239c72e-850f-45f1-9f9f-568c2bee1546","Type":"ContainerStarted","Data":"d9a00a502252e5069ec62a2da7014765bd55c17e178995fb878953de5ccc6062"} Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.682233 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.687649 4983 generic.go:334] "Generic (PLEG): container finished" 
podID="4fa168af-421e-4a45-8201-13eb69a20830" containerID="3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454" exitCode=0 Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.687707 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa168af-421e-4a45-8201-13eb69a20830","Type":"ContainerDied","Data":"3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454"} Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.687788 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4fa168af-421e-4a45-8201-13eb69a20830","Type":"ContainerDied","Data":"557d7f63b96380c4263a5622c23548333b34c54a0f0182e27ee1806c02e54286"} Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.687814 4983 scope.go:117] "RemoveContainer" containerID="4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.688059 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.713796 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b69f99f98-bmrwt" podStartSLOduration=2.713766738 podStartE2EDuration="2.713766738s" podCreationTimestamp="2025-11-25 20:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:39.70327949 +0000 UTC m=+1060.815812872" watchObservedRunningTime="2025-11-25 20:44:39.713766738 +0000 UTC m=+1060.826300130" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.719198 4983 scope.go:117] "RemoveContainer" containerID="3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.748737 4983 scope.go:117] "RemoveContainer" containerID="3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.799959 4983 scope.go:117] "RemoveContainer" containerID="4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667" Nov 25 20:44:39 crc kubenswrapper[4983]: E1125 20:44:39.800720 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667\": container with ID starting with 4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667 not found: ID does not exist" containerID="4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.800762 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667"} err="failed to get container status \"4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667\": rpc error: code = NotFound desc = 
could not find container \"4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667\": container with ID starting with 4aeeb2cf26476e14dd27a4103a485b59ad76ec20ddce0771ea48aea3bb9e2667 not found: ID does not exist" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.800788 4983 scope.go:117] "RemoveContainer" containerID="3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908" Nov 25 20:44:39 crc kubenswrapper[4983]: E1125 20:44:39.801165 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908\": container with ID starting with 3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908 not found: ID does not exist" containerID="3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.801199 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908"} err="failed to get container status \"3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908\": rpc error: code = NotFound desc = could not find container \"3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908\": container with ID starting with 3bd24c50c0952fbb736fcb117717c6ce790bccb082c4525185f672c2a7e3e908 not found: ID does not exist" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.801221 4983 scope.go:117] "RemoveContainer" containerID="3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.804712 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa168af-421e-4a45-8201-13eb69a20830-run-httpd\") pod \"4fa168af-421e-4a45-8201-13eb69a20830\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " Nov 25 20:44:39 
crc kubenswrapper[4983]: I1125 20:44:39.804757 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-sg-core-conf-yaml\") pod \"4fa168af-421e-4a45-8201-13eb69a20830\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.804798 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-scripts\") pod \"4fa168af-421e-4a45-8201-13eb69a20830\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.804890 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-combined-ca-bundle\") pod \"4fa168af-421e-4a45-8201-13eb69a20830\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.805043 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q765t\" (UniqueName: \"kubernetes.io/projected/4fa168af-421e-4a45-8201-13eb69a20830-kube-api-access-q765t\") pod \"4fa168af-421e-4a45-8201-13eb69a20830\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.805078 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-config-data\") pod \"4fa168af-421e-4a45-8201-13eb69a20830\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.805148 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa168af-421e-4a45-8201-13eb69a20830-log-httpd\") 
pod \"4fa168af-421e-4a45-8201-13eb69a20830\" (UID: \"4fa168af-421e-4a45-8201-13eb69a20830\") " Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.806419 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa168af-421e-4a45-8201-13eb69a20830-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4fa168af-421e-4a45-8201-13eb69a20830" (UID: "4fa168af-421e-4a45-8201-13eb69a20830"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:44:39 crc kubenswrapper[4983]: E1125 20:44:39.809905 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454\": container with ID starting with 3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454 not found: ID does not exist" containerID="3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.809956 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454"} err="failed to get container status \"3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454\": rpc error: code = NotFound desc = could not find container \"3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454\": container with ID starting with 3334743637f4897062db584764a5b5b107deb46a912d0020423f94578038a454 not found: ID does not exist" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.810464 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa168af-421e-4a45-8201-13eb69a20830-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4fa168af-421e-4a45-8201-13eb69a20830" (UID: "4fa168af-421e-4a45-8201-13eb69a20830"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.820305 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-scripts" (OuterVolumeSpecName: "scripts") pod "4fa168af-421e-4a45-8201-13eb69a20830" (UID: "4fa168af-421e-4a45-8201-13eb69a20830"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.841734 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa168af-421e-4a45-8201-13eb69a20830-kube-api-access-q765t" (OuterVolumeSpecName: "kube-api-access-q765t") pod "4fa168af-421e-4a45-8201-13eb69a20830" (UID: "4fa168af-421e-4a45-8201-13eb69a20830"). InnerVolumeSpecName "kube-api-access-q765t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.852260 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4fa168af-421e-4a45-8201-13eb69a20830" (UID: "4fa168af-421e-4a45-8201-13eb69a20830"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.885725 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fa168af-421e-4a45-8201-13eb69a20830" (UID: "4fa168af-421e-4a45-8201-13eb69a20830"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.909846 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.909885 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q765t\" (UniqueName: \"kubernetes.io/projected/4fa168af-421e-4a45-8201-13eb69a20830-kube-api-access-q765t\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.909897 4983 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa168af-421e-4a45-8201-13eb69a20830-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.909906 4983 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4fa168af-421e-4a45-8201-13eb69a20830-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.909921 4983 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.909929 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.913684 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-config-data" (OuterVolumeSpecName: "config-data") pod "4fa168af-421e-4a45-8201-13eb69a20830" (UID: "4fa168af-421e-4a45-8201-13eb69a20830"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.927451 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.927522 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.927593 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.928449 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02a7a7ce01bacff8c2eff18d797a1189b8fa10fb78c41ac31562d8f18df21be8"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:44:39 crc kubenswrapper[4983]: I1125 20:44:39.928506 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://02a7a7ce01bacff8c2eff18d797a1189b8fa10fb78c41ac31562d8f18df21be8" gracePeriod=600 Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.011049 4983 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/4fa168af-421e-4a45-8201-13eb69a20830-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.049468 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.064231 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.086706 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:44:40 crc kubenswrapper[4983]: E1125 20:44:40.087273 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa168af-421e-4a45-8201-13eb69a20830" containerName="ceilometer-notification-agent" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.087292 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa168af-421e-4a45-8201-13eb69a20830" containerName="ceilometer-notification-agent" Nov 25 20:44:40 crc kubenswrapper[4983]: E1125 20:44:40.087306 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa168af-421e-4a45-8201-13eb69a20830" containerName="sg-core" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.087313 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa168af-421e-4a45-8201-13eb69a20830" containerName="sg-core" Nov 25 20:44:40 crc kubenswrapper[4983]: E1125 20:44:40.087355 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa168af-421e-4a45-8201-13eb69a20830" containerName="proxy-httpd" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.087364 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa168af-421e-4a45-8201-13eb69a20830" containerName="proxy-httpd" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.087613 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa168af-421e-4a45-8201-13eb69a20830" containerName="sg-core" Nov 25 20:44:40 crc 
kubenswrapper[4983]: I1125 20:44:40.087640 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa168af-421e-4a45-8201-13eb69a20830" containerName="proxy-httpd" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.087657 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa168af-421e-4a45-8201-13eb69a20830" containerName="ceilometer-notification-agent" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.090021 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.093523 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.093945 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.096245 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.112771 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l74ss\" (UniqueName: \"kubernetes.io/projected/ce09d39b-1687-45a5-877c-a8e12876b41d-kube-api-access-l74ss\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.112829 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce09d39b-1687-45a5-877c-a8e12876b41d-log-httpd\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.112935 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-config-data\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.112981 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce09d39b-1687-45a5-877c-a8e12876b41d-run-httpd\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.113109 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.113168 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-scripts\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.113286 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.215761 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l74ss\" (UniqueName: \"kubernetes.io/projected/ce09d39b-1687-45a5-877c-a8e12876b41d-kube-api-access-l74ss\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") 
" pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.215811 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce09d39b-1687-45a5-877c-a8e12876b41d-log-httpd\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.215849 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-config-data\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.215871 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce09d39b-1687-45a5-877c-a8e12876b41d-run-httpd\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.215957 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.216150 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-scripts\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.216252 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.216504 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce09d39b-1687-45a5-877c-a8e12876b41d-log-httpd\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.217865 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce09d39b-1687-45a5-877c-a8e12876b41d-run-httpd\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.221548 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.222160 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-scripts\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.223418 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.229681 4983 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-config-data\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.236697 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l74ss\" (UniqueName: \"kubernetes.io/projected/ce09d39b-1687-45a5-877c-a8e12876b41d-kube-api-access-l74ss\") pod \"ceilometer-0\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.428027 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.708714 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="02a7a7ce01bacff8c2eff18d797a1189b8fa10fb78c41ac31562d8f18df21be8" exitCode=0 Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.708801 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"02a7a7ce01bacff8c2eff18d797a1189b8fa10fb78c41ac31562d8f18df21be8"} Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.709288 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"564c2d7b04bb43d995119a30a67c66d1a1f25eab8467f75e61575755980ee6c6"} Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.709324 4983 scope.go:117] "RemoveContainer" containerID="332f27d6dcaee6d6f56ec3302fd09a3529205e5c94a5a306755d9476fe03353d" Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.715080 4983 generic.go:334] "Generic (PLEG): container 
finished" podID="cca9d2b3-2f79-4d38-8427-f5bfae9fc314" containerID="1335540bf91f4cafb344bc9dd59382aaadc51c67d9fc88c9019bd0cc3beceda5" exitCode=0 Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.715160 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7dskv" event={"ID":"cca9d2b3-2f79-4d38-8427-f5bfae9fc314","Type":"ContainerDied","Data":"1335540bf91f4cafb344bc9dd59382aaadc51c67d9fc88c9019bd0cc3beceda5"} Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.717932 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:40 crc kubenswrapper[4983]: W1125 20:44:40.878395 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce09d39b_1687_45a5_877c_a8e12876b41d.slice/crio-f55ed61cae7f0002f11ae0f4f199e850120e22a5c9e41465541df3523fa47e20 WatchSource:0}: Error finding container f55ed61cae7f0002f11ae0f4f199e850120e22a5c9e41465541df3523fa47e20: Status 404 returned error can't find the container with id f55ed61cae7f0002f11ae0f4f199e850120e22a5c9e41465541df3523fa47e20 Nov 25 20:44:40 crc kubenswrapper[4983]: I1125 20:44:40.887758 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:44:41 crc kubenswrapper[4983]: I1125 20:44:41.639751 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa168af-421e-4a45-8201-13eb69a20830" path="/var/lib/kubelet/pods/4fa168af-421e-4a45-8201-13eb69a20830/volumes" Nov 25 20:44:41 crc kubenswrapper[4983]: I1125 20:44:41.731448 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce09d39b-1687-45a5-877c-a8e12876b41d","Type":"ContainerStarted","Data":"6763ee7940cbafd1392877d8adf31c8371a328ff766be19e31cba1f8a9b4554b"} Nov 25 20:44:41 crc kubenswrapper[4983]: I1125 20:44:41.731963 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"ce09d39b-1687-45a5-877c-a8e12876b41d","Type":"ContainerStarted","Data":"f55ed61cae7f0002f11ae0f4f199e850120e22a5c9e41465541df3523fa47e20"} Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.143908 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7dskv" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.165371 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-db-sync-config-data\") pod \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.165483 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-combined-ca-bundle\") pod \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.165507 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-config-data\") pod \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.165529 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-scripts\") pod \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.165602 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-etc-machine-id\") pod \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.165683 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj4wg\" (UniqueName: \"kubernetes.io/projected/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-kube-api-access-rj4wg\") pod \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\" (UID: \"cca9d2b3-2f79-4d38-8427-f5bfae9fc314\") " Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.165827 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cca9d2b3-2f79-4d38-8427-f5bfae9fc314" (UID: "cca9d2b3-2f79-4d38-8427-f5bfae9fc314"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.169946 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cca9d2b3-2f79-4d38-8427-f5bfae9fc314" (UID: "cca9d2b3-2f79-4d38-8427-f5bfae9fc314"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.170458 4983 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.170482 4983 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.171843 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-scripts" (OuterVolumeSpecName: "scripts") pod "cca9d2b3-2f79-4d38-8427-f5bfae9fc314" (UID: "cca9d2b3-2f79-4d38-8427-f5bfae9fc314"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.173783 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-kube-api-access-rj4wg" (OuterVolumeSpecName: "kube-api-access-rj4wg") pod "cca9d2b3-2f79-4d38-8427-f5bfae9fc314" (UID: "cca9d2b3-2f79-4d38-8427-f5bfae9fc314"). InnerVolumeSpecName "kube-api-access-rj4wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.243161 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cca9d2b3-2f79-4d38-8427-f5bfae9fc314" (UID: "cca9d2b3-2f79-4d38-8427-f5bfae9fc314"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.261093 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-config-data" (OuterVolumeSpecName: "config-data") pod "cca9d2b3-2f79-4d38-8427-f5bfae9fc314" (UID: "cca9d2b3-2f79-4d38-8427-f5bfae9fc314"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.273541 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.273602 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.273616 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.273628 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj4wg\" (UniqueName: \"kubernetes.io/projected/cca9d2b3-2f79-4d38-8427-f5bfae9fc314-kube-api-access-rj4wg\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.743241 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce09d39b-1687-45a5-877c-a8e12876b41d","Type":"ContainerStarted","Data":"9e9f64c133c3b4995305f553ab463f5a4bdd79548a19e8e9596e93fc36812d6c"} Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.746111 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7dskv" 
event={"ID":"cca9d2b3-2f79-4d38-8427-f5bfae9fc314","Type":"ContainerDied","Data":"333c8e95b17d65bf2c2d06f0cb2b3fb00ce33ec9d929fb96311c0c614d7ef4cf"} Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.746143 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="333c8e95b17d65bf2c2d06f0cb2b3fb00ce33ec9d929fb96311c0c614d7ef4cf" Nov 25 20:44:42 crc kubenswrapper[4983]: I1125 20:44:42.746284 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7dskv" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.096986 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 20:44:43 crc kubenswrapper[4983]: E1125 20:44:43.097727 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca9d2b3-2f79-4d38-8427-f5bfae9fc314" containerName="cinder-db-sync" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.097749 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca9d2b3-2f79-4d38-8427-f5bfae9fc314" containerName="cinder-db-sync" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.097986 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="cca9d2b3-2f79-4d38-8427-f5bfae9fc314" containerName="cinder-db-sync" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.099357 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.106378 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pmbck" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.106515 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.106610 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.106763 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.116047 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.204239 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ncx96"] Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.204677 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" podUID="f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" containerName="dnsmasq-dns" containerID="cri-o://812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628" gracePeriod=10 Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.212837 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.250235 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-r25m6"] Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.251857 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.271989 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-r25m6"] Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.300159 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs4zn\" (UniqueName: \"kubernetes.io/projected/f3ab4993-1e73-4209-a907-0e4dd00708aa-kube-api-access-hs4zn\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.300242 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.300285 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.300317 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.300339 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.300356 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpdp4\" (UniqueName: \"kubernetes.io/projected/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-kube-api-access-hpdp4\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.300379 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.300395 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.300411 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.300429 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.300446 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-config\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.300496 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.378823 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.386236 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.390660 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.390867 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.405952 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.406034 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.406071 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.406100 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.406120 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hpdp4\" (UniqueName: \"kubernetes.io/projected/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-kube-api-access-hpdp4\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.406143 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.406161 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.406180 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.406201 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.406221 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-config\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.406252 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.406291 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs4zn\" (UniqueName: \"kubernetes.io/projected/f3ab4993-1e73-4209-a907-0e4dd00708aa-kube-api-access-hs4zn\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.407303 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-config\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.408152 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.408796 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-dns-svc\") pod 
\"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.409187 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.409699 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.410035 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.413150 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.415135 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-scripts\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc 
kubenswrapper[4983]: I1125 20:44:43.415462 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.416103 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-config-data\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.427147 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpdp4\" (UniqueName: \"kubernetes.io/projected/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-kube-api-access-hpdp4\") pod \"cinder-scheduler-0\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.427907 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs4zn\" (UniqueName: \"kubernetes.io/projected/f3ab4993-1e73-4209-a907-0e4dd00708aa-kube-api-access-hs4zn\") pod \"dnsmasq-dns-5c9776ccc5-r25m6\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.507882 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11e57219-e61d-4d00-9a39-66276747ea82-etc-machine-id\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.507949 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-config-data\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.507979 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f524\" (UniqueName: \"kubernetes.io/projected/11e57219-e61d-4d00-9a39-66276747ea82-kube-api-access-9f524\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.507995 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-config-data-custom\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.508023 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11e57219-e61d-4d00-9a39-66276747ea82-logs\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.508053 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.508127 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-scripts\") pod \"cinder-api-0\" 
(UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.588968 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.611060 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-scripts\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.611154 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11e57219-e61d-4d00-9a39-66276747ea82-etc-machine-id\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.611230 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-config-data\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.611303 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f524\" (UniqueName: \"kubernetes.io/projected/11e57219-e61d-4d00-9a39-66276747ea82-kube-api-access-9f524\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.611328 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.611355 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11e57219-e61d-4d00-9a39-66276747ea82-logs\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.611419 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.611224 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11e57219-e61d-4d00-9a39-66276747ea82-etc-machine-id\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.612380 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11e57219-e61d-4d00-9a39-66276747ea82-logs\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.618156 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-config-data-custom\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.618892 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-scripts\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.620118 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.623334 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-config-data\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.641281 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f524\" (UniqueName: \"kubernetes.io/projected/11e57219-e61d-4d00-9a39-66276747ea82-kube-api-access-9f524\") pod \"cinder-api-0\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.713575 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.725414 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.733415 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.795240 4983 generic.go:334] "Generic (PLEG): container finished" podID="f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" containerID="812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628" exitCode=0 Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.795364 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" event={"ID":"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4","Type":"ContainerDied","Data":"812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628"} Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.795404 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" event={"ID":"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4","Type":"ContainerDied","Data":"19fdb7627c002f0c879e1ee99ecabcaef1f7495e0be886d8d76cb5df96e002fd"} Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.795568 4983 scope.go:117] "RemoveContainer" containerID="812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.795763 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-ncx96" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.806069 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce09d39b-1687-45a5-877c-a8e12876b41d","Type":"ContainerStarted","Data":"4b620fb54090493d73e8f9746b45e4dfcd9711a21aedcacf63a3f75ee8336e82"} Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.859892 4983 scope.go:117] "RemoveContainer" containerID="8acb0f28b62cd9251b3cba553421364e2b8556eff7e6d2c1cffe0bb893e34f10" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.912649 4983 scope.go:117] "RemoveContainer" containerID="812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628" Nov 25 20:44:43 crc kubenswrapper[4983]: E1125 20:44:43.913161 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628\": container with ID starting with 812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628 not found: ID does not exist" containerID="812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.913187 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628"} err="failed to get container status \"812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628\": rpc error: code = NotFound desc = could not find container \"812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628\": container with ID starting with 812246ba1e438373a9d4ccb32ebb15c8c35a86935b8a0b0401129d903fb20628 not found: ID does not exist" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.913210 4983 scope.go:117] "RemoveContainer" containerID="8acb0f28b62cd9251b3cba553421364e2b8556eff7e6d2c1cffe0bb893e34f10" Nov 25 20:44:43 crc 
kubenswrapper[4983]: E1125 20:44:43.913800 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8acb0f28b62cd9251b3cba553421364e2b8556eff7e6d2c1cffe0bb893e34f10\": container with ID starting with 8acb0f28b62cd9251b3cba553421364e2b8556eff7e6d2c1cffe0bb893e34f10 not found: ID does not exist" containerID="8acb0f28b62cd9251b3cba553421364e2b8556eff7e6d2c1cffe0bb893e34f10" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.913823 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8acb0f28b62cd9251b3cba553421364e2b8556eff7e6d2c1cffe0bb893e34f10"} err="failed to get container status \"8acb0f28b62cd9251b3cba553421364e2b8556eff7e6d2c1cffe0bb893e34f10\": rpc error: code = NotFound desc = could not find container \"8acb0f28b62cd9251b3cba553421364e2b8556eff7e6d2c1cffe0bb893e34f10\": container with ID starting with 8acb0f28b62cd9251b3cba553421364e2b8556eff7e6d2c1cffe0bb893e34f10 not found: ID does not exist" Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.919983 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-ovsdbserver-sb\") pod \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.920096 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-config\") pod \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.920136 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-dns-svc\") pod 
\"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.920211 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcxrn\" (UniqueName: \"kubernetes.io/projected/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-kube-api-access-jcxrn\") pod \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.920242 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-dns-swift-storage-0\") pod \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.921397 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-ovsdbserver-nb\") pod \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\" (UID: \"f8bbe420-76df-4aa7-a5ff-2ff482ba93b4\") " Nov 25 20:44:43 crc kubenswrapper[4983]: I1125 20:44:43.971828 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-kube-api-access-jcxrn" (OuterVolumeSpecName: "kube-api-access-jcxrn") pod "f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" (UID: "f8bbe420-76df-4aa7-a5ff-2ff482ba93b4"). InnerVolumeSpecName "kube-api-access-jcxrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.027906 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcxrn\" (UniqueName: \"kubernetes.io/projected/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-kube-api-access-jcxrn\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.075888 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" (UID: "f8bbe420-76df-4aa7-a5ff-2ff482ba93b4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.108203 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-r25m6"] Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.117832 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-config" (OuterVolumeSpecName: "config") pod "f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" (UID: "f8bbe420-76df-4aa7-a5ff-2ff482ba93b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.119854 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" (UID: "f8bbe420-76df-4aa7-a5ff-2ff482ba93b4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.124741 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" (UID: "f8bbe420-76df-4aa7-a5ff-2ff482ba93b4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.130090 4983 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.130130 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.130143 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.130155 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.140127 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" (UID: "f8bbe420-76df-4aa7-a5ff-2ff482ba93b4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.231642 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.361320 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.450656 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ncx96"] Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.470785 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-ncx96"] Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.482910 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 20:44:44 crc kubenswrapper[4983]: W1125 20:44:44.483075 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c69ca86_c8b5_4079_aae9_5a6f14eb7ee2.slice/crio-20bc3a86648fe25cce386322345df182e5a152b602a1a334a127518028e398e7 WatchSource:0}: Error finding container 20bc3a86648fe25cce386322345df182e5a152b602a1a334a127518028e398e7: Status 404 returned error can't find the container with id 20bc3a86648fe25cce386322345df182e5a152b602a1a334a127518028e398e7 Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.704901 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.835328 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11e57219-e61d-4d00-9a39-66276747ea82","Type":"ContainerStarted","Data":"0c7a60a9f44f7286bf68ba8b58de9ef3c91012f185114a4b82f1658cc68d0f78"} Nov 25 20:44:44 crc 
kubenswrapper[4983]: I1125 20:44:44.837836 4983 generic.go:334] "Generic (PLEG): container finished" podID="f3ab4993-1e73-4209-a907-0e4dd00708aa" containerID="844723af89d769b6059c4085fdd8a7abf503f2957227e634fdd6b116047c3c0a" exitCode=0 Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.837913 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" event={"ID":"f3ab4993-1e73-4209-a907-0e4dd00708aa","Type":"ContainerDied","Data":"844723af89d769b6059c4085fdd8a7abf503f2957227e634fdd6b116047c3c0a"} Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.837938 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" event={"ID":"f3ab4993-1e73-4209-a907-0e4dd00708aa","Type":"ContainerStarted","Data":"7826a96d18780ce9a07f310ca1c938d146bb5cc6dbce83c0590eb79cef8f8a4a"} Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.845756 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2","Type":"ContainerStarted","Data":"20bc3a86648fe25cce386322345df182e5a152b602a1a334a127518028e398e7"} Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.869642 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce09d39b-1687-45a5-877c-a8e12876b41d","Type":"ContainerStarted","Data":"fd864ebbd42773e691617327de00f7b1f14e7bd41c00ed75fe64da74ff149809"} Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.870862 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 20:44:44 crc kubenswrapper[4983]: I1125 20:44:44.924827 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.649091931 podStartE2EDuration="4.924806854s" podCreationTimestamp="2025-11-25 20:44:40 +0000 UTC" firstStartedPulling="2025-11-25 20:44:40.880934492 +0000 UTC m=+1061.993467884" 
lastFinishedPulling="2025-11-25 20:44:44.156649415 +0000 UTC m=+1065.269182807" observedRunningTime="2025-11-25 20:44:44.909644923 +0000 UTC m=+1066.022178315" watchObservedRunningTime="2025-11-25 20:44:44.924806854 +0000 UTC m=+1066.037340246" Nov 25 20:44:45 crc kubenswrapper[4983]: I1125 20:44:45.410265 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 20:44:45 crc kubenswrapper[4983]: I1125 20:44:45.639515 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" path="/var/lib/kubelet/pods/f8bbe420-76df-4aa7-a5ff-2ff482ba93b4/volumes" Nov 25 20:44:45 crc kubenswrapper[4983]: I1125 20:44:45.958785 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" event={"ID":"f3ab4993-1e73-4209-a907-0e4dd00708aa","Type":"ContainerStarted","Data":"b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e"} Nov 25 20:44:45 crc kubenswrapper[4983]: I1125 20:44:45.959101 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:45 crc kubenswrapper[4983]: I1125 20:44:45.975981 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11e57219-e61d-4d00-9a39-66276747ea82","Type":"ContainerStarted","Data":"176adebbbdb058104ae2f7fa1a5aea8af5c6adbdce422a25b0c6329c94f235e0"} Nov 25 20:44:45 crc kubenswrapper[4983]: I1125 20:44:45.984925 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" podStartSLOduration=2.984908554 podStartE2EDuration="2.984908554s" podCreationTimestamp="2025-11-25 20:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:45.983975639 +0000 UTC m=+1067.096509031" watchObservedRunningTime="2025-11-25 20:44:45.984908554 +0000 UTC 
m=+1067.097441946" Nov 25 20:44:46 crc kubenswrapper[4983]: I1125 20:44:46.836512 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:46 crc kubenswrapper[4983]: I1125 20:44:46.988292 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11e57219-e61d-4d00-9a39-66276747ea82","Type":"ContainerStarted","Data":"29eeb441878fb015c597ffc632d17650b5695be6061bb569dcda612a581e1f7a"} Nov 25 20:44:46 crc kubenswrapper[4983]: I1125 20:44:46.988493 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="11e57219-e61d-4d00-9a39-66276747ea82" containerName="cinder-api-log" containerID="cri-o://176adebbbdb058104ae2f7fa1a5aea8af5c6adbdce422a25b0c6329c94f235e0" gracePeriod=30 Nov 25 20:44:46 crc kubenswrapper[4983]: I1125 20:44:46.988627 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="11e57219-e61d-4d00-9a39-66276747ea82" containerName="cinder-api" containerID="cri-o://29eeb441878fb015c597ffc632d17650b5695be6061bb569dcda612a581e1f7a" gracePeriod=30 Nov 25 20:44:46 crc kubenswrapper[4983]: I1125 20:44:46.988832 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 20:44:46 crc kubenswrapper[4983]: I1125 20:44:46.992173 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2","Type":"ContainerStarted","Data":"16bd972803d7eb018427c7f12b6bf4254b3289e22de4aedfb9240a33b0d0d09d"} Nov 25 20:44:46 crc kubenswrapper[4983]: I1125 20:44:46.992212 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2","Type":"ContainerStarted","Data":"e44008a3f8d4c6b8e6ee7a31c47a1aecf35016109b75fe7ecd3c72c2ca0bce49"} Nov 25 20:44:47 crc 
kubenswrapper[4983]: I1125 20:44:47.021315 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.021296854 podStartE2EDuration="4.021296854s" podCreationTimestamp="2025-11-25 20:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:47.013804266 +0000 UTC m=+1068.126337658" watchObservedRunningTime="2025-11-25 20:44:47.021296854 +0000 UTC m=+1068.133830236" Nov 25 20:44:47 crc kubenswrapper[4983]: I1125 20:44:47.037362 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.164173779 podStartE2EDuration="4.037340239s" podCreationTimestamp="2025-11-25 20:44:43 +0000 UTC" firstStartedPulling="2025-11-25 20:44:44.516680308 +0000 UTC m=+1065.629213700" lastFinishedPulling="2025-11-25 20:44:45.389846768 +0000 UTC m=+1066.502380160" observedRunningTime="2025-11-25 20:44:47.03096167 +0000 UTC m=+1068.143495062" watchObservedRunningTime="2025-11-25 20:44:47.037340239 +0000 UTC m=+1068.149873631" Nov 25 20:44:47 crc kubenswrapper[4983]: I1125 20:44:47.301629 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-778b5c8885-ww4ht" Nov 25 20:44:47 crc kubenswrapper[4983]: I1125 20:44:47.385396 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f4b565868-4nbfx"] Nov 25 20:44:47 crc kubenswrapper[4983]: I1125 20:44:47.385648 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f4b565868-4nbfx" podUID="fc504575-1f16-42bd-bc6d-b4a9f16bc15c" containerName="neutron-api" containerID="cri-o://e826521c79b32648163fd2fd07947722d89b658ebeb52d8656d5304037b3459c" gracePeriod=30 Nov 25 20:44:47 crc kubenswrapper[4983]: I1125 20:44:47.386040 4983 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-7f4b565868-4nbfx" podUID="fc504575-1f16-42bd-bc6d-b4a9f16bc15c" containerName="neutron-httpd" containerID="cri-o://ffc4d455db9d3451305faa8382d3138c5bcc09749801fa2664f8a6e99053562b" gracePeriod=30 Nov 25 20:44:47 crc kubenswrapper[4983]: I1125 20:44:47.933078 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.031943 4983 generic.go:334] "Generic (PLEG): container finished" podID="11e57219-e61d-4d00-9a39-66276747ea82" containerID="29eeb441878fb015c597ffc632d17650b5695be6061bb569dcda612a581e1f7a" exitCode=0 Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.031975 4983 generic.go:334] "Generic (PLEG): container finished" podID="11e57219-e61d-4d00-9a39-66276747ea82" containerID="176adebbbdb058104ae2f7fa1a5aea8af5c6adbdce422a25b0c6329c94f235e0" exitCode=143 Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.032016 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11e57219-e61d-4d00-9a39-66276747ea82","Type":"ContainerDied","Data":"29eeb441878fb015c597ffc632d17650b5695be6061bb569dcda612a581e1f7a"} Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.032043 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11e57219-e61d-4d00-9a39-66276747ea82","Type":"ContainerDied","Data":"176adebbbdb058104ae2f7fa1a5aea8af5c6adbdce422a25b0c6329c94f235e0"} Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.032053 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11e57219-e61d-4d00-9a39-66276747ea82","Type":"ContainerDied","Data":"0c7a60a9f44f7286bf68ba8b58de9ef3c91012f185114a4b82f1658cc68d0f78"} Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.032066 4983 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0c7a60a9f44f7286bf68ba8b58de9ef3c91012f185114a4b82f1658cc68d0f78" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.047341 4983 generic.go:334] "Generic (PLEG): container finished" podID="fc504575-1f16-42bd-bc6d-b4a9f16bc15c" containerID="ffc4d455db9d3451305faa8382d3138c5bcc09749801fa2664f8a6e99053562b" exitCode=0 Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.048184 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f4b565868-4nbfx" event={"ID":"fc504575-1f16-42bd-bc6d-b4a9f16bc15c","Type":"ContainerDied","Data":"ffc4d455db9d3451305faa8382d3138c5bcc09749801fa2664f8a6e99053562b"} Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.092887 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.269589 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f524\" (UniqueName: \"kubernetes.io/projected/11e57219-e61d-4d00-9a39-66276747ea82-kube-api-access-9f524\") pod \"11e57219-e61d-4d00-9a39-66276747ea82\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.269724 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-config-data-custom\") pod \"11e57219-e61d-4d00-9a39-66276747ea82\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.269755 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-config-data\") pod \"11e57219-e61d-4d00-9a39-66276747ea82\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.269848 4983 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-scripts\") pod \"11e57219-e61d-4d00-9a39-66276747ea82\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.269891 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11e57219-e61d-4d00-9a39-66276747ea82-logs\") pod \"11e57219-e61d-4d00-9a39-66276747ea82\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.269968 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-combined-ca-bundle\") pod \"11e57219-e61d-4d00-9a39-66276747ea82\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.270057 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11e57219-e61d-4d00-9a39-66276747ea82-etc-machine-id\") pod \"11e57219-e61d-4d00-9a39-66276747ea82\" (UID: \"11e57219-e61d-4d00-9a39-66276747ea82\") " Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.270588 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11e57219-e61d-4d00-9a39-66276747ea82-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "11e57219-e61d-4d00-9a39-66276747ea82" (UID: "11e57219-e61d-4d00-9a39-66276747ea82"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.272727 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e57219-e61d-4d00-9a39-66276747ea82-logs" (OuterVolumeSpecName: "logs") pod "11e57219-e61d-4d00-9a39-66276747ea82" (UID: "11e57219-e61d-4d00-9a39-66276747ea82"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.286764 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "11e57219-e61d-4d00-9a39-66276747ea82" (UID: "11e57219-e61d-4d00-9a39-66276747ea82"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.298833 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e57219-e61d-4d00-9a39-66276747ea82-kube-api-access-9f524" (OuterVolumeSpecName: "kube-api-access-9f524") pod "11e57219-e61d-4d00-9a39-66276747ea82" (UID: "11e57219-e61d-4d00-9a39-66276747ea82"). InnerVolumeSpecName "kube-api-access-9f524". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.298924 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-scripts" (OuterVolumeSpecName: "scripts") pod "11e57219-e61d-4d00-9a39-66276747ea82" (UID: "11e57219-e61d-4d00-9a39-66276747ea82"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.371259 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f524\" (UniqueName: \"kubernetes.io/projected/11e57219-e61d-4d00-9a39-66276747ea82-kube-api-access-9f524\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.371293 4983 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.371317 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.371326 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11e57219-e61d-4d00-9a39-66276747ea82-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.371334 4983 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11e57219-e61d-4d00-9a39-66276747ea82-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.388431 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-config-data" (OuterVolumeSpecName: "config-data") pod "11e57219-e61d-4d00-9a39-66276747ea82" (UID: "11e57219-e61d-4d00-9a39-66276747ea82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.393785 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11e57219-e61d-4d00-9a39-66276747ea82" (UID: "11e57219-e61d-4d00-9a39-66276747ea82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.472716 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.473029 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11e57219-e61d-4d00-9a39-66276747ea82-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.725662 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 20:44:48 crc kubenswrapper[4983]: I1125 20:44:48.811546 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.056280 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.125212 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.148623 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.161610 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 20:44:49 crc kubenswrapper[4983]: E1125 20:44:49.162111 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e57219-e61d-4d00-9a39-66276747ea82" containerName="cinder-api-log" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.162126 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e57219-e61d-4d00-9a39-66276747ea82" containerName="cinder-api-log" Nov 25 20:44:49 crc kubenswrapper[4983]: E1125 20:44:49.162164 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" containerName="dnsmasq-dns" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.162173 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" containerName="dnsmasq-dns" Nov 25 20:44:49 crc kubenswrapper[4983]: E1125 20:44:49.162189 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e57219-e61d-4d00-9a39-66276747ea82" containerName="cinder-api" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.162197 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e57219-e61d-4d00-9a39-66276747ea82" containerName="cinder-api" Nov 25 20:44:49 crc kubenswrapper[4983]: E1125 20:44:49.162214 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" containerName="init" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.162221 4983 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" containerName="init" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.162426 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e57219-e61d-4d00-9a39-66276747ea82" containerName="cinder-api-log" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.162448 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e57219-e61d-4d00-9a39-66276747ea82" containerName="cinder-api" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.162465 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8bbe420-76df-4aa7-a5ff-2ff482ba93b4" containerName="dnsmasq-dns" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.163835 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.171501 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.171701 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.171817 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.182320 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.297718 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-config-data-custom\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.298116 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af04822c-335c-4c44-9711-19c401c54c9f-logs\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.298176 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-scripts\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.298198 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.298230 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.298261 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhdph\" (UniqueName: \"kubernetes.io/projected/af04822c-335c-4c44-9711-19c401c54c9f-kube-api-access-mhdph\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.298317 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.298365 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af04822c-335c-4c44-9711-19c401c54c9f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.298397 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-config-data\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.400348 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-config-data-custom\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.400404 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af04822c-335c-4c44-9711-19c401c54c9f-logs\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.401036 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af04822c-335c-4c44-9711-19c401c54c9f-logs\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 
20:44:49.400487 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-scripts\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.401119 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.401575 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.401614 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhdph\" (UniqueName: \"kubernetes.io/projected/af04822c-335c-4c44-9711-19c401c54c9f-kube-api-access-mhdph\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.401665 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.401703 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/af04822c-335c-4c44-9711-19c401c54c9f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.401743 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-config-data\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.402274 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af04822c-335c-4c44-9711-19c401c54c9f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.412393 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.413155 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-config-data-custom\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.413688 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.414419 
4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-scripts\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.414532 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-config-data\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.422011 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af04822c-335c-4c44-9711-19c401c54c9f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.438143 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhdph\" (UniqueName: \"kubernetes.io/projected/af04822c-335c-4c44-9711-19c401c54c9f-kube-api-access-mhdph\") pod \"cinder-api-0\" (UID: \"af04822c-335c-4c44-9711-19c401c54c9f\") " pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.527316 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.617973 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e57219-e61d-4d00-9a39-66276747ea82" path="/var/lib/kubelet/pods/11e57219-e61d-4d00-9a39-66276747ea82/volumes" Nov 25 20:44:49 crc kubenswrapper[4983]: I1125 20:44:49.822439 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-746b6775bd-26zqf" Nov 25 20:44:50 crc kubenswrapper[4983]: I1125 20:44:50.050001 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:50 crc kubenswrapper[4983]: I1125 20:44:50.710297 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 20:44:50 crc kubenswrapper[4983]: I1125 20:44:50.742928 4983 generic.go:334] "Generic (PLEG): container finished" podID="fc504575-1f16-42bd-bc6d-b4a9f16bc15c" containerID="e826521c79b32648163fd2fd07947722d89b658ebeb52d8656d5304037b3459c" exitCode=0 Nov 25 20:44:50 crc kubenswrapper[4983]: I1125 20:44:50.743332 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f4b565868-4nbfx" event={"ID":"fc504575-1f16-42bd-bc6d-b4a9f16bc15c","Type":"ContainerDied","Data":"e826521c79b32648163fd2fd07947722d89b658ebeb52d8656d5304037b3459c"} Nov 25 20:44:50 crc kubenswrapper[4983]: W1125 20:44:50.743908 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf04822c_335c_4c44_9711_19c401c54c9f.slice/crio-b6ed0dfde2db21816551a142f947785724d1312721c8f6eb0362aefccbd7cc1d WatchSource:0}: Error finding container b6ed0dfde2db21816551a142f947785724d1312721c8f6eb0362aefccbd7cc1d: Status 404 returned error can't find the container with id b6ed0dfde2db21816551a142f947785724d1312721c8f6eb0362aefccbd7cc1d Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.094623 4983 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.247296 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-httpd-config\") pod \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.247413 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-config\") pod \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.248535 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-combined-ca-bundle\") pod \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.248633 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxvzb\" (UniqueName: \"kubernetes.io/projected/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-kube-api-access-qxvzb\") pod \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.248682 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-ovndb-tls-certs\") pod \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\" (UID: \"fc504575-1f16-42bd-bc6d-b4a9f16bc15c\") " Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.259370 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-kube-api-access-qxvzb" (OuterVolumeSpecName: "kube-api-access-qxvzb") pod "fc504575-1f16-42bd-bc6d-b4a9f16bc15c" (UID: "fc504575-1f16-42bd-bc6d-b4a9f16bc15c"). InnerVolumeSpecName "kube-api-access-qxvzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.268261 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fc504575-1f16-42bd-bc6d-b4a9f16bc15c" (UID: "fc504575-1f16-42bd-bc6d-b4a9f16bc15c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.313845 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-config" (OuterVolumeSpecName: "config") pod "fc504575-1f16-42bd-bc6d-b4a9f16bc15c" (UID: "fc504575-1f16-42bd-bc6d-b4a9f16bc15c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.338702 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fc504575-1f16-42bd-bc6d-b4a9f16bc15c" (UID: "fc504575-1f16-42bd-bc6d-b4a9f16bc15c"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.350945 4983 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.350977 4983 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.350988 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.351000 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxvzb\" (UniqueName: \"kubernetes.io/projected/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-kube-api-access-qxvzb\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.361076 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc504575-1f16-42bd-bc6d-b4a9f16bc15c" (UID: "fc504575-1f16-42bd-bc6d-b4a9f16bc15c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.453919 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc504575-1f16-42bd-bc6d-b4a9f16bc15c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.629982 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b69f99f98-bmrwt" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.693794 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b9ff8df78-mv2g6"] Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.694587 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b9ff8df78-mv2g6" podUID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" containerName="barbican-api" containerID="cri-o://cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c" gracePeriod=30 Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.694183 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6b9ff8df78-mv2g6" podUID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" containerName="barbican-api-log" containerID="cri-o://28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c" gracePeriod=30 Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.749296 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7f9d7c8cfb-s259l" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.810257 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"af04822c-335c-4c44-9711-19c401c54c9f","Type":"ContainerStarted","Data":"de2275a2f1ca27a273f71cba9777dcae69a6ef3bf36d7edd1a7edeb5b291d5ba"} Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.810306 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"af04822c-335c-4c44-9711-19c401c54c9f","Type":"ContainerStarted","Data":"b6ed0dfde2db21816551a142f947785724d1312721c8f6eb0362aefccbd7cc1d"} Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.842032 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-746b6775bd-26zqf"] Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.842287 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-746b6775bd-26zqf" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon-log" containerID="cri-o://258950fcf68cd7c9df940549ae0451b1b9f70f389d65405e3ab17da233c2b00c" gracePeriod=30 Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.842833 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-746b6775bd-26zqf" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon" containerID="cri-o://9598658aedba74555877ee6f6068a0ccf9b04456d13ea1fee47bfe7f3e7437f7" gracePeriod=30 Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.853185 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f4b565868-4nbfx" event={"ID":"fc504575-1f16-42bd-bc6d-b4a9f16bc15c","Type":"ContainerDied","Data":"504dd4d7a2db8920f7fb291befbc0b81912111f12abf94dbceee81a53c7a7d34"} Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.853248 4983 scope.go:117] "RemoveContainer" containerID="ffc4d455db9d3451305faa8382d3138c5bcc09749801fa2664f8a6e99053562b" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.853345 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f4b565868-4nbfx" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.890158 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-746b6775bd-26zqf" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.964710 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f4b565868-4nbfx"] Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.969449 4983 scope.go:117] "RemoveContainer" containerID="e826521c79b32648163fd2fd07947722d89b658ebeb52d8656d5304037b3459c" Nov 25 20:44:51 crc kubenswrapper[4983]: I1125 20:44:51.992864 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7f4b565868-4nbfx"] Nov 25 20:44:52 crc kubenswrapper[4983]: I1125 20:44:52.865778 4983 generic.go:334] "Generic (PLEG): container finished" podID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" containerID="28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c" exitCode=143 Nov 25 20:44:52 crc kubenswrapper[4983]: I1125 20:44:52.865865 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9ff8df78-mv2g6" event={"ID":"1d2f7584-5f22-4f5a-a58f-8856c28e913c","Type":"ContainerDied","Data":"28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c"} Nov 25 20:44:52 crc kubenswrapper[4983]: I1125 20:44:52.868159 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"af04822c-335c-4c44-9711-19c401c54c9f","Type":"ContainerStarted","Data":"0149d80a6f47535fe53c64e920fc5ff51ac8b8a2fb36d00dac3329a0faab4161"} Nov 25 20:44:52 crc kubenswrapper[4983]: I1125 20:44:52.868346 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 20:44:52 crc kubenswrapper[4983]: I1125 20:44:52.888769 
4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.888755542 podStartE2EDuration="3.888755542s" podCreationTimestamp="2025-11-25 20:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:44:52.88606392 +0000 UTC m=+1073.998597312" watchObservedRunningTime="2025-11-25 20:44:52.888755542 +0000 UTC m=+1074.001288934" Nov 25 20:44:53 crc kubenswrapper[4983]: I1125 20:44:53.590866 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:44:53 crc kubenswrapper[4983]: I1125 20:44:53.618591 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc504575-1f16-42bd-bc6d-b4a9f16bc15c" path="/var/lib/kubelet/pods/fc504575-1f16-42bd-bc6d-b4a9f16bc15c/volumes" Nov 25 20:44:53 crc kubenswrapper[4983]: I1125 20:44:53.658755 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t98c5"] Nov 25 20:44:53 crc kubenswrapper[4983]: I1125 20:44:53.659033 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" podUID="850af343-20d4-4033-9414-c22c6a180ffa" containerName="dnsmasq-dns" containerID="cri-o://0a7efe5fad316c68333788e31ea6ec4d8c2d6414a4552d2f357f0530e7d9478a" gracePeriod=10 Nov 25 20:44:53 crc kubenswrapper[4983]: I1125 20:44:53.881075 4983 generic.go:334] "Generic (PLEG): container finished" podID="850af343-20d4-4033-9414-c22c6a180ffa" containerID="0a7efe5fad316c68333788e31ea6ec4d8c2d6414a4552d2f357f0530e7d9478a" exitCode=0 Nov 25 20:44:53 crc kubenswrapper[4983]: I1125 20:44:53.881151 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" 
event={"ID":"850af343-20d4-4033-9414-c22c6a180ffa","Type":"ContainerDied","Data":"0a7efe5fad316c68333788e31ea6ec4d8c2d6414a4552d2f357f0530e7d9478a"} Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:53.998854 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.075709 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.212467 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.306521 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-dns-svc\") pod \"850af343-20d4-4033-9414-c22c6a180ffa\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.306664 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvp8v\" (UniqueName: \"kubernetes.io/projected/850af343-20d4-4033-9414-c22c6a180ffa-kube-api-access-bvp8v\") pod \"850af343-20d4-4033-9414-c22c6a180ffa\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.306687 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-sb\") pod \"850af343-20d4-4033-9414-c22c6a180ffa\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.306753 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-config\") pod 
\"850af343-20d4-4033-9414-c22c6a180ffa\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.306908 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-dns-swift-storage-0\") pod \"850af343-20d4-4033-9414-c22c6a180ffa\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.306978 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-nb\") pod \"850af343-20d4-4033-9414-c22c6a180ffa\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.316769 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850af343-20d4-4033-9414-c22c6a180ffa-kube-api-access-bvp8v" (OuterVolumeSpecName: "kube-api-access-bvp8v") pod "850af343-20d4-4033-9414-c22c6a180ffa" (UID: "850af343-20d4-4033-9414-c22c6a180ffa"). InnerVolumeSpecName "kube-api-access-bvp8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.358233 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "850af343-20d4-4033-9414-c22c6a180ffa" (UID: "850af343-20d4-4033-9414-c22c6a180ffa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.358313 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "850af343-20d4-4033-9414-c22c6a180ffa" (UID: "850af343-20d4-4033-9414-c22c6a180ffa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.361080 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-config" (OuterVolumeSpecName: "config") pod "850af343-20d4-4033-9414-c22c6a180ffa" (UID: "850af343-20d4-4033-9414-c22c6a180ffa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:54 crc kubenswrapper[4983]: E1125 20:44:54.367431 4983 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-nb podName:850af343-20d4-4033-9414-c22c6a180ffa nodeName:}" failed. No retries permitted until 2025-11-25 20:44:54.867401482 +0000 UTC m=+1075.979934874 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "ovsdbserver-nb" (UniqueName: "kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-nb") pod "850af343-20d4-4033-9414-c22c6a180ffa" (UID: "850af343-20d4-4033-9414-c22c6a180ffa") : error deleting /var/lib/kubelet/pods/850af343-20d4-4033-9414-c22c6a180ffa/volume-subpaths: remove /var/lib/kubelet/pods/850af343-20d4-4033-9414-c22c6a180ffa/volume-subpaths: no such file or directory Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.367763 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "850af343-20d4-4033-9414-c22c6a180ffa" (UID: "850af343-20d4-4033-9414-c22c6a180ffa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.408912 4983 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.408941 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.408950 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvp8v\" (UniqueName: \"kubernetes.io/projected/850af343-20d4-4033-9414-c22c6a180ffa-kube-api-access-bvp8v\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.408962 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 
20:44:54.408973 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.892329 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.892379 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-t98c5" event={"ID":"850af343-20d4-4033-9414-c22c6a180ffa","Type":"ContainerDied","Data":"43f95792276c5d7a744a43570a6d7fdc6afa8bd7d603bae8284333b63d1fa9f2"} Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.892415 4983 scope.go:117] "RemoveContainer" containerID="0a7efe5fad316c68333788e31ea6ec4d8c2d6414a4552d2f357f0530e7d9478a" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.892466 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" containerName="cinder-scheduler" containerID="cri-o://e44008a3f8d4c6b8e6ee7a31c47a1aecf35016109b75fe7ecd3c72c2ca0bce49" gracePeriod=30 Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.892601 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" containerName="probe" containerID="cri-o://16bd972803d7eb018427c7f12b6bf4254b3289e22de4aedfb9240a33b0d0d09d" gracePeriod=30 Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.919013 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-nb\") pod \"850af343-20d4-4033-9414-c22c6a180ffa\" (UID: \"850af343-20d4-4033-9414-c22c6a180ffa\") " Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.919976 
4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "850af343-20d4-4033-9414-c22c6a180ffa" (UID: "850af343-20d4-4033-9414-c22c6a180ffa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.936848 4983 scope.go:117] "RemoveContainer" containerID="bbef66633a09a8602bb6cde9b9aa8414c7af7155b6f1159e794f3d3631d7b200" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.963230 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9ff8df78-mv2g6" podUID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Nov 25 20:44:54 crc kubenswrapper[4983]: I1125 20:44:54.963313 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6b9ff8df78-mv2g6" podUID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.021504 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/850af343-20d4-4033-9414-c22c6a180ffa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.249642 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t98c5"] Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.259920 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-t98c5"] Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.355123 4983 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.531732 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbwq6\" (UniqueName: \"kubernetes.io/projected/1d2f7584-5f22-4f5a-a58f-8856c28e913c-kube-api-access-mbwq6\") pod \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.531811 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-config-data\") pod \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.532069 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2f7584-5f22-4f5a-a58f-8856c28e913c-logs\") pod \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.532102 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-config-data-custom\") pod \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.532161 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-combined-ca-bundle\") pod \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\" (UID: \"1d2f7584-5f22-4f5a-a58f-8856c28e913c\") " Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.533082 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/1d2f7584-5f22-4f5a-a58f-8856c28e913c-logs" (OuterVolumeSpecName: "logs") pod "1d2f7584-5f22-4f5a-a58f-8856c28e913c" (UID: "1d2f7584-5f22-4f5a-a58f-8856c28e913c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.539857 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1d2f7584-5f22-4f5a-a58f-8856c28e913c" (UID: "1d2f7584-5f22-4f5a-a58f-8856c28e913c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.540736 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d2f7584-5f22-4f5a-a58f-8856c28e913c-kube-api-access-mbwq6" (OuterVolumeSpecName: "kube-api-access-mbwq6") pod "1d2f7584-5f22-4f5a-a58f-8856c28e913c" (UID: "1d2f7584-5f22-4f5a-a58f-8856c28e913c"). InnerVolumeSpecName "kube-api-access-mbwq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.566904 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d2f7584-5f22-4f5a-a58f-8856c28e913c" (UID: "1d2f7584-5f22-4f5a-a58f-8856c28e913c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.586735 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-config-data" (OuterVolumeSpecName: "config-data") pod "1d2f7584-5f22-4f5a-a58f-8856c28e913c" (UID: "1d2f7584-5f22-4f5a-a58f-8856c28e913c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.636856 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850af343-20d4-4033-9414-c22c6a180ffa" path="/var/lib/kubelet/pods/850af343-20d4-4033-9414-c22c6a180ffa/volumes" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.638198 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbwq6\" (UniqueName: \"kubernetes.io/projected/1d2f7584-5f22-4f5a-a58f-8856c28e913c-kube-api-access-mbwq6\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.638220 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.638231 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d2f7584-5f22-4f5a-a58f-8856c28e913c-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.638253 4983 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.638265 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d2f7584-5f22-4f5a-a58f-8856c28e913c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.920059 4983 generic.go:334] "Generic (PLEG): container finished" podID="2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" containerID="16bd972803d7eb018427c7f12b6bf4254b3289e22de4aedfb9240a33b0d0d09d" exitCode=0 Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.920422 4983 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2","Type":"ContainerDied","Data":"16bd972803d7eb018427c7f12b6bf4254b3289e22de4aedfb9240a33b0d0d09d"} Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.929845 4983 generic.go:334] "Generic (PLEG): container finished" podID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" containerID="cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c" exitCode=0 Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.929928 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6b9ff8df78-mv2g6" Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.929941 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9ff8df78-mv2g6" event={"ID":"1d2f7584-5f22-4f5a-a58f-8856c28e913c","Type":"ContainerDied","Data":"cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c"} Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.929979 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6b9ff8df78-mv2g6" event={"ID":"1d2f7584-5f22-4f5a-a58f-8856c28e913c","Type":"ContainerDied","Data":"f14573187a45e69eb80e219be60a7612edac67bad38d38eed26f8819a02d95e6"} Nov 25 20:44:55 crc kubenswrapper[4983]: I1125 20:44:55.930001 4983 scope.go:117] "RemoveContainer" containerID="cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c" Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.004269 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6b9ff8df78-mv2g6"] Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.013033 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6b9ff8df78-mv2g6"] Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.019168 4983 scope.go:117] "RemoveContainer" containerID="28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c" Nov 
25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.038177 4983 scope.go:117] "RemoveContainer" containerID="cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c" Nov 25 20:44:56 crc kubenswrapper[4983]: E1125 20:44:56.038628 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c\": container with ID starting with cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c not found: ID does not exist" containerID="cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c" Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.038672 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c"} err="failed to get container status \"cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c\": rpc error: code = NotFound desc = could not find container \"cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c\": container with ID starting with cc27521dc0ee62d53eabeeab65dfc05d68736b7e72abacf16f7748e2f4d8b15c not found: ID does not exist" Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.038702 4983 scope.go:117] "RemoveContainer" containerID="28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c" Nov 25 20:44:56 crc kubenswrapper[4983]: E1125 20:44:56.039870 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c\": container with ID starting with 28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c not found: ID does not exist" containerID="28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c" Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.039902 4983 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c"} err="failed to get container status \"28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c\": rpc error: code = NotFound desc = could not find container \"28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c\": container with ID starting with 28c1f10f5ed472876feb7332029239b354e19f6b52f254bafcd65cf8fc88e77c not found: ID does not exist" Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.065178 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.067116 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ffcbfd47b-hljtd" Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.443729 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-746b6775bd-26zqf" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:54140->10.217.0.149:8443: read: connection reset by peer" Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.444332 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-746b6775bd-26zqf" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.700407 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7b58ff8778-fz55h" Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.945467 4983 generic.go:334] "Generic (PLEG): container finished" podID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" 
containerID="9598658aedba74555877ee6f6068a0ccf9b04456d13ea1fee47bfe7f3e7437f7" exitCode=0 Nov 25 20:44:56 crc kubenswrapper[4983]: I1125 20:44:56.945542 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-746b6775bd-26zqf" event={"ID":"1ac04518-4a47-43b3-8e9f-84e8f3a80648","Type":"ContainerDied","Data":"9598658aedba74555877ee6f6068a0ccf9b04456d13ea1fee47bfe7f3e7437f7"} Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.030201 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 20:44:57 crc kubenswrapper[4983]: E1125 20:44:57.030535 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850af343-20d4-4033-9414-c22c6a180ffa" containerName="dnsmasq-dns" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.030565 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="850af343-20d4-4033-9414-c22c6a180ffa" containerName="dnsmasq-dns" Nov 25 20:44:57 crc kubenswrapper[4983]: E1125 20:44:57.030584 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850af343-20d4-4033-9414-c22c6a180ffa" containerName="init" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.030591 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="850af343-20d4-4033-9414-c22c6a180ffa" containerName="init" Nov 25 20:44:57 crc kubenswrapper[4983]: E1125 20:44:57.030616 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc504575-1f16-42bd-bc6d-b4a9f16bc15c" containerName="neutron-httpd" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.030622 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc504575-1f16-42bd-bc6d-b4a9f16bc15c" containerName="neutron-httpd" Nov 25 20:44:57 crc kubenswrapper[4983]: E1125 20:44:57.030636 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" containerName="barbican-api" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.030642 4983 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" containerName="barbican-api" Nov 25 20:44:57 crc kubenswrapper[4983]: E1125 20:44:57.030654 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc504575-1f16-42bd-bc6d-b4a9f16bc15c" containerName="neutron-api" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.030662 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc504575-1f16-42bd-bc6d-b4a9f16bc15c" containerName="neutron-api" Nov 25 20:44:57 crc kubenswrapper[4983]: E1125 20:44:57.030670 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" containerName="barbican-api-log" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.030676 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" containerName="barbican-api-log" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.030890 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" containerName="barbican-api" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.030911 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc504575-1f16-42bd-bc6d-b4a9f16bc15c" containerName="neutron-api" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.030919 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc504575-1f16-42bd-bc6d-b4a9f16bc15c" containerName="neutron-httpd" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.030926 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="850af343-20d4-4033-9414-c22c6a180ffa" containerName="dnsmasq-dns" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.030943 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" containerName="barbican-api-log" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.031644 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.036258 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-x6tpj" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.036385 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.036514 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.054316 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.166873 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rs49\" (UniqueName: \"kubernetes.io/projected/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-kube-api-access-4rs49\") pod \"openstackclient\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.166942 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.167037 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-openstack-config-secret\") pod \"openstackclient\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.167216 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-openstack-config\") pod \"openstackclient\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.268928 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-openstack-config\") pod \"openstackclient\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.269012 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rs49\" (UniqueName: \"kubernetes.io/projected/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-kube-api-access-4rs49\") pod \"openstackclient\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.269050 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.269100 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-openstack-config-secret\") pod \"openstackclient\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.270047 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-openstack-config\") pod \"openstackclient\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.278342 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-openstack-config-secret\") pod \"openstackclient\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.293530 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.308313 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rs49\" (UniqueName: \"kubernetes.io/projected/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-kube-api-access-4rs49\") pod \"openstackclient\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.359793 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.511379 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.534540 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.542710 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.552437 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.552540 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.618020 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d2f7584-5f22-4f5a-a58f-8856c28e913c" path="/var/lib/kubelet/pods/1d2f7584-5f22-4f5a-a58f-8856c28e913c/volumes" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.687272 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b2fefe1-596f-4e7c-8de9-b3c019ed40ea-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.687841 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b2fefe1-596f-4e7c-8de9-b3c019ed40ea-openstack-config\") pod \"openstackclient\" (UID: \"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.688089 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2fefe1-596f-4e7c-8de9-b3c019ed40ea-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.688143 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngxb4\" (UniqueName: \"kubernetes.io/projected/3b2fefe1-596f-4e7c-8de9-b3c019ed40ea-kube-api-access-ngxb4\") pod \"openstackclient\" (UID: \"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.790289 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2fefe1-596f-4e7c-8de9-b3c019ed40ea-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.790362 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngxb4\" (UniqueName: \"kubernetes.io/projected/3b2fefe1-596f-4e7c-8de9-b3c019ed40ea-kube-api-access-ngxb4\") pod \"openstackclient\" (UID: \"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.790453 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b2fefe1-596f-4e7c-8de9-b3c019ed40ea-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.790486 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/3b2fefe1-596f-4e7c-8de9-b3c019ed40ea-openstack-config\") pod \"openstackclient\" (UID: \"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.793335 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3b2fefe1-596f-4e7c-8de9-b3c019ed40ea-openstack-config\") pod \"openstackclient\" (UID: \"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.799843 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2fefe1-596f-4e7c-8de9-b3c019ed40ea-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.805227 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3b2fefe1-596f-4e7c-8de9-b3c019ed40ea-openstack-config-secret\") pod \"openstackclient\" (UID: \"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.815815 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngxb4\" (UniqueName: \"kubernetes.io/projected/3b2fefe1-596f-4e7c-8de9-b3c019ed40ea-kube-api-access-ngxb4\") pod \"openstackclient\" (UID: \"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea\") " pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.887886 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: E1125 20:44:57.888747 4983 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 20:44:57 crc kubenswrapper[4983]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_ffceaf3f-86a3-414f-b2eb-3a730bbdc96a_0(efb6ddab0d9c848aeb10694e28eb559b1949a396da5126b7db0d87e42ceafde3): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"efb6ddab0d9c848aeb10694e28eb559b1949a396da5126b7db0d87e42ceafde3" Netns:"/var/run/netns/f614f81a-69a1-407f-bb91-938abe2c3a32" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=efb6ddab0d9c848aeb10694e28eb559b1949a396da5126b7db0d87e42ceafde3;K8S_POD_UID=ffceaf3f-86a3-414f-b2eb-3a730bbdc96a" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient efb6ddab0d9c848aeb10694e28eb559b1949a396da5126b7db0d87e42ceafde3 network default NAD default] [openstack/openstackclient efb6ddab0d9c848aeb10694e28eb559b1949a396da5126b7db0d87e42ceafde3 network default NAD default] failed to configure pod interface: canceled old pod sandbox waiting for OVS port binding for 0a:58:0a:d9:00:a9 [10.217.0.169/23] Nov 25 20:44:57 crc kubenswrapper[4983]: ' Nov 25 20:44:57 crc kubenswrapper[4983]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 20:44:57 crc kubenswrapper[4983]: > Nov 25 20:44:57 crc kubenswrapper[4983]: E1125 20:44:57.888833 4983 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 20:44:57 crc kubenswrapper[4983]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_ffceaf3f-86a3-414f-b2eb-3a730bbdc96a_0(efb6ddab0d9c848aeb10694e28eb559b1949a396da5126b7db0d87e42ceafde3): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"efb6ddab0d9c848aeb10694e28eb559b1949a396da5126b7db0d87e42ceafde3" Netns:"/var/run/netns/f614f81a-69a1-407f-bb91-938abe2c3a32" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=efb6ddab0d9c848aeb10694e28eb559b1949a396da5126b7db0d87e42ceafde3;K8S_POD_UID=ffceaf3f-86a3-414f-b2eb-3a730bbdc96a" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient efb6ddab0d9c848aeb10694e28eb559b1949a396da5126b7db0d87e42ceafde3 network default NAD default] [openstack/openstackclient efb6ddab0d9c848aeb10694e28eb559b1949a396da5126b7db0d87e42ceafde3 network default NAD default] failed to configure pod interface: canceled old pod sandbox waiting for OVS port binding for 0a:58:0a:d9:00:a9 [10.217.0.169/23] Nov 25 
20:44:57 crc kubenswrapper[4983]: ' Nov 25 20:44:57 crc kubenswrapper[4983]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 20:44:57 crc kubenswrapper[4983]: > pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.957365 4983 generic.go:334] "Generic (PLEG): container finished" podID="2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" containerID="e44008a3f8d4c6b8e6ee7a31c47a1aecf35016109b75fe7ecd3c72c2ca0bce49" exitCode=0 Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.957529 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.957572 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2","Type":"ContainerDied","Data":"e44008a3f8d4c6b8e6ee7a31c47a1aecf35016109b75fe7ecd3c72c2ca0bce49"} Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.973523 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 20:44:57 crc kubenswrapper[4983]: I1125 20:44:57.978197 4983 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="ffceaf3f-86a3-414f-b2eb-3a730bbdc96a" podUID="3b2fefe1-596f-4e7c-8de9-b3c019ed40ea" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.023076 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.108705 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpdp4\" (UniqueName: \"kubernetes.io/projected/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-kube-api-access-hpdp4\") pod \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.108758 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rs49\" (UniqueName: \"kubernetes.io/projected/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-kube-api-access-4rs49\") pod \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.108821 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-scripts\") pod \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.108927 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-etc-machine-id\") pod \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.108951 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-combined-ca-bundle\") pod \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.109017 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-config-data\") pod \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.109179 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-combined-ca-bundle\") pod \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.109216 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-openstack-config-secret\") pod \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.109243 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-openstack-config\") pod \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\" (UID: \"ffceaf3f-86a3-414f-b2eb-3a730bbdc96a\") " Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.109307 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" (UID: "2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.110164 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-config-data-custom\") pod \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\" (UID: \"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2\") " Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.116158 4983 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.117585 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ffceaf3f-86a3-414f-b2eb-3a730bbdc96a" (UID: "ffceaf3f-86a3-414f-b2eb-3a730bbdc96a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.117877 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ffceaf3f-86a3-414f-b2eb-3a730bbdc96a" (UID: "ffceaf3f-86a3-414f-b2eb-3a730bbdc96a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.122762 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-scripts" (OuterVolumeSpecName: "scripts") pod "2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" (UID: "2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.122837 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffceaf3f-86a3-414f-b2eb-3a730bbdc96a" (UID: "ffceaf3f-86a3-414f-b2eb-3a730bbdc96a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.122917 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-kube-api-access-hpdp4" (OuterVolumeSpecName: "kube-api-access-hpdp4") pod "2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" (UID: "2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2"). InnerVolumeSpecName "kube-api-access-hpdp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.122941 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" (UID: "2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.124593 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-kube-api-access-4rs49" (OuterVolumeSpecName: "kube-api-access-4rs49") pod "ffceaf3f-86a3-414f-b2eb-3a730bbdc96a" (UID: "ffceaf3f-86a3-414f-b2eb-3a730bbdc96a"). InnerVolumeSpecName "kube-api-access-4rs49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.184217 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" (UID: "2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.225607 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpdp4\" (UniqueName: \"kubernetes.io/projected/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-kube-api-access-hpdp4\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.225663 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rs49\" (UniqueName: \"kubernetes.io/projected/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-kube-api-access-4rs49\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.225678 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.225691 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.225705 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.225718 4983 reconciler_common.go:293] "Volume detached for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.225730 4983 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.225741 4983 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.242984 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-config-data" (OuterVolumeSpecName: "config-data") pod "2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" (UID: "2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.330092 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:44:58 crc kubenswrapper[4983]: W1125 20:44:58.388461 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b2fefe1_596f_4e7c_8de9_b3c019ed40ea.slice/crio-b4364bfcf631c562afc2fb2ff6b666e9b25211e556194b127cc8fc0aad1efc70 WatchSource:0}: Error finding container b4364bfcf631c562afc2fb2ff6b666e9b25211e556194b127cc8fc0aad1efc70: Status 404 returned error can't find the container with id b4364bfcf631c562afc2fb2ff6b666e9b25211e556194b127cc8fc0aad1efc70 Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.401087 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.985520 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea","Type":"ContainerStarted","Data":"b4364bfcf631c562afc2fb2ff6b666e9b25211e556194b127cc8fc0aad1efc70"} Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.988234 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.997006 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.998289 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2","Type":"ContainerDied","Data":"20bc3a86648fe25cce386322345df182e5a152b602a1a334a127518028e398e7"} Nov 25 20:44:58 crc kubenswrapper[4983]: I1125 20:44:58.998352 4983 scope.go:117] "RemoveContainer" containerID="16bd972803d7eb018427c7f12b6bf4254b3289e22de4aedfb9240a33b0d0d09d" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.014923 4983 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="ffceaf3f-86a3-414f-b2eb-3a730bbdc96a" podUID="3b2fefe1-596f-4e7c-8de9-b3c019ed40ea" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.044822 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.072022 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.072982 4983 scope.go:117] "RemoveContainer" containerID="e44008a3f8d4c6b8e6ee7a31c47a1aecf35016109b75fe7ecd3c72c2ca0bce49" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.088941 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 20:44:59 crc kubenswrapper[4983]: E1125 20:44:59.089446 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" containerName="probe" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.089465 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" containerName="probe" Nov 25 20:44:59 crc kubenswrapper[4983]: E1125 20:44:59.089483 4983 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" containerName="cinder-scheduler" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.089488 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" containerName="cinder-scheduler" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.089693 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" containerName="probe" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.089716 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" containerName="cinder-scheduler" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.090753 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.095244 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.098645 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.250452 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb5e604e-0461-4f5c-acd3-412096243892-config-data\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.250584 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9rg9\" (UniqueName: \"kubernetes.io/projected/cb5e604e-0461-4f5c-acd3-412096243892-kube-api-access-c9rg9\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 
20:44:59.250614 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5e604e-0461-4f5c-acd3-412096243892-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.250667 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb5e604e-0461-4f5c-acd3-412096243892-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.250804 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb5e604e-0461-4f5c-acd3-412096243892-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.250933 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb5e604e-0461-4f5c-acd3-412096243892-scripts\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.353175 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb5e604e-0461-4f5c-acd3-412096243892-scripts\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.353233 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/cb5e604e-0461-4f5c-acd3-412096243892-config-data\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.353318 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9rg9\" (UniqueName: \"kubernetes.io/projected/cb5e604e-0461-4f5c-acd3-412096243892-kube-api-access-c9rg9\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.353336 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5e604e-0461-4f5c-acd3-412096243892-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.353387 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb5e604e-0461-4f5c-acd3-412096243892-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.353418 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb5e604e-0461-4f5c-acd3-412096243892-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.353498 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cb5e604e-0461-4f5c-acd3-412096243892-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.359544 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb5e604e-0461-4f5c-acd3-412096243892-scripts\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.359802 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5e604e-0461-4f5c-acd3-412096243892-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.368454 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb5e604e-0461-4f5c-acd3-412096243892-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.375326 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb5e604e-0461-4f5c-acd3-412096243892-config-data\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.379161 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9rg9\" (UniqueName: \"kubernetes.io/projected/cb5e604e-0461-4f5c-acd3-412096243892-kube-api-access-c9rg9\") pod \"cinder-scheduler-0\" (UID: \"cb5e604e-0461-4f5c-acd3-412096243892\") " pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.427058 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.622317 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2" path="/var/lib/kubelet/pods/2c69ca86-c8b5-4079-aae9-5a6f14eb7ee2/volumes" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.623375 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffceaf3f-86a3-414f-b2eb-3a730bbdc96a" path="/var/lib/kubelet/pods/ffceaf3f-86a3-414f-b2eb-3a730bbdc96a/volumes" Nov 25 20:44:59 crc kubenswrapper[4983]: I1125 20:44:59.954291 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 20:44:59 crc kubenswrapper[4983]: W1125 20:44:59.959389 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb5e604e_0461_4f5c_acd3_412096243892.slice/crio-3684631a8ca775deedfe8e49485965373228a8ab23dda6940daef56e23bf1f6e WatchSource:0}: Error finding container 3684631a8ca775deedfe8e49485965373228a8ab23dda6940daef56e23bf1f6e: Status 404 returned error can't find the container with id 3684631a8ca775deedfe8e49485965373228a8ab23dda6940daef56e23bf1f6e Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.001512 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb5e604e-0461-4f5c-acd3-412096243892","Type":"ContainerStarted","Data":"3684631a8ca775deedfe8e49485965373228a8ab23dda6940daef56e23bf1f6e"} Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.134060 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck"] Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.136111 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.144657 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.144894 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.156827 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck"] Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.273930 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/361000b8-fec7-4af1-8453-05a888ce3db9-secret-volume\") pod \"collect-profiles-29401725-zw2ck\" (UID: \"361000b8-fec7-4af1-8453-05a888ce3db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.274428 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/361000b8-fec7-4af1-8453-05a888ce3db9-config-volume\") pod \"collect-profiles-29401725-zw2ck\" (UID: \"361000b8-fec7-4af1-8453-05a888ce3db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.274480 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbpqm\" (UniqueName: \"kubernetes.io/projected/361000b8-fec7-4af1-8453-05a888ce3db9-kube-api-access-cbpqm\") pod \"collect-profiles-29401725-zw2ck\" (UID: \"361000b8-fec7-4af1-8453-05a888ce3db9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.375927 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbpqm\" (UniqueName: \"kubernetes.io/projected/361000b8-fec7-4af1-8453-05a888ce3db9-kube-api-access-cbpqm\") pod \"collect-profiles-29401725-zw2ck\" (UID: \"361000b8-fec7-4af1-8453-05a888ce3db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.376293 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/361000b8-fec7-4af1-8453-05a888ce3db9-secret-volume\") pod \"collect-profiles-29401725-zw2ck\" (UID: \"361000b8-fec7-4af1-8453-05a888ce3db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.376481 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/361000b8-fec7-4af1-8453-05a888ce3db9-config-volume\") pod \"collect-profiles-29401725-zw2ck\" (UID: \"361000b8-fec7-4af1-8453-05a888ce3db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.377485 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/361000b8-fec7-4af1-8453-05a888ce3db9-config-volume\") pod \"collect-profiles-29401725-zw2ck\" (UID: \"361000b8-fec7-4af1-8453-05a888ce3db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.383233 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/361000b8-fec7-4af1-8453-05a888ce3db9-secret-volume\") pod \"collect-profiles-29401725-zw2ck\" (UID: \"361000b8-fec7-4af1-8453-05a888ce3db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.398309 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbpqm\" (UniqueName: \"kubernetes.io/projected/361000b8-fec7-4af1-8453-05a888ce3db9-kube-api-access-cbpqm\") pod \"collect-profiles-29401725-zw2ck\" (UID: \"361000b8-fec7-4af1-8453-05a888ce3db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.461953 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:00 crc kubenswrapper[4983]: I1125 20:45:00.928375 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck"] Nov 25 20:45:01 crc kubenswrapper[4983]: I1125 20:45:01.015262 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" event={"ID":"361000b8-fec7-4af1-8453-05a888ce3db9","Type":"ContainerStarted","Data":"01e24c433ba5a48e1dee2742fee2fa29781faf033f83a033d7dc07829a930e75"} Nov 25 20:45:01 crc kubenswrapper[4983]: I1125 20:45:01.017751 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb5e604e-0461-4f5c-acd3-412096243892","Type":"ContainerStarted","Data":"da9dc5acb1e03f3e174b9312039333231735a2f2050ddd1ab7fcc89f17023ac6"} Nov 25 20:45:01 crc kubenswrapper[4983]: I1125 20:45:01.518130 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.031008 4983 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cb5e604e-0461-4f5c-acd3-412096243892","Type":"ContainerStarted","Data":"4444bf800d3cec6c54bbc0b198de77b4b148920f09b7ab1719b5e70f63b50560"} Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.033144 4983 generic.go:334] "Generic (PLEG): container finished" podID="361000b8-fec7-4af1-8453-05a888ce3db9" containerID="7dc0cddef90b976b9ecb97d3ea1c4e9a30fc781e08b00d6fd71aa3c548ebb8f3" exitCode=0 Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.033184 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" event={"ID":"361000b8-fec7-4af1-8453-05a888ce3db9","Type":"ContainerDied","Data":"7dc0cddef90b976b9ecb97d3ea1c4e9a30fc781e08b00d6fd71aa3c548ebb8f3"} Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.068421 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.068400777 podStartE2EDuration="3.068400777s" podCreationTimestamp="2025-11-25 20:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:02.065509041 +0000 UTC m=+1083.178042433" watchObservedRunningTime="2025-11-25 20:45:02.068400777 +0000 UTC m=+1083.180934169" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.111300 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6466c6df55-xffs5"] Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.113059 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.118221 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.118468 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.134152 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.136036 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6466c6df55-xffs5"] Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.219670 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-combined-ca-bundle\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.220062 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-config-data\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.220279 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85v7t\" (UniqueName: \"kubernetes.io/projected/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-kube-api-access-85v7t\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc 
kubenswrapper[4983]: I1125 20:45:02.220692 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-internal-tls-certs\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.220776 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-run-httpd\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.220845 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-log-httpd\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.220938 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-public-tls-certs\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.221050 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-etc-swift\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 
25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.322465 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85v7t\" (UniqueName: \"kubernetes.io/projected/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-kube-api-access-85v7t\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.322546 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-internal-tls-certs\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.322580 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-run-httpd\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.322605 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-log-httpd\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.322640 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-public-tls-certs\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 
20:45:02.322686 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-etc-swift\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.322715 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-combined-ca-bundle\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.322741 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-config-data\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.323073 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-run-httpd\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.325230 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-log-httpd\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.333300 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-combined-ca-bundle\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.335536 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-internal-tls-certs\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.337626 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-public-tls-certs\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.341167 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-config-data\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.352598 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-etc-swift\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.378260 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85v7t\" (UniqueName: 
\"kubernetes.io/projected/b05ecf5f-8220-40f4-b459-27d2dd7c6fbf-kube-api-access-85v7t\") pod \"swift-proxy-6466c6df55-xffs5\" (UID: \"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf\") " pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.461375 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-7brgc"] Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.462753 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7brgc" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.464390 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.518341 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7brgc"] Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.565313 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-p6qg5"] Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.566606 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-p6qg5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.572322 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-p6qg5"] Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.637076 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85d8435b-daff-48b1-848a-c846eddae231-operator-scripts\") pod \"nova-api-db-create-7brgc\" (UID: \"85d8435b-daff-48b1-848a-c846eddae231\") " pod="openstack/nova-api-db-create-7brgc" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.638039 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwtsj\" (UniqueName: \"kubernetes.io/projected/85d8435b-daff-48b1-848a-c846eddae231-kube-api-access-xwtsj\") pod \"nova-api-db-create-7brgc\" (UID: \"85d8435b-daff-48b1-848a-c846eddae231\") " pod="openstack/nova-api-db-create-7brgc" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.666926 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-9595-account-create-update-9qjxp"] Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.670736 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-9595-account-create-update-9qjxp" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.674385 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.681238 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9595-account-create-update-9qjxp"] Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.745498 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85d8435b-daff-48b1-848a-c846eddae231-operator-scripts\") pod \"nova-api-db-create-7brgc\" (UID: \"85d8435b-daff-48b1-848a-c846eddae231\") " pod="openstack/nova-api-db-create-7brgc" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.753054 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwtsj\" (UniqueName: \"kubernetes.io/projected/85d8435b-daff-48b1-848a-c846eddae231-kube-api-access-xwtsj\") pod \"nova-api-db-create-7brgc\" (UID: \"85d8435b-daff-48b1-848a-c846eddae231\") " pod="openstack/nova-api-db-create-7brgc" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.753125 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35da62d7-c131-4115-9e20-9d412832b067-operator-scripts\") pod \"nova-cell0-db-create-p6qg5\" (UID: \"35da62d7-c131-4115-9e20-9d412832b067\") " pod="openstack/nova-cell0-db-create-p6qg5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.753172 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6qz\" (UniqueName: \"kubernetes.io/projected/35da62d7-c131-4115-9e20-9d412832b067-kube-api-access-jf6qz\") pod \"nova-cell0-db-create-p6qg5\" (UID: \"35da62d7-c131-4115-9e20-9d412832b067\") " 
pod="openstack/nova-cell0-db-create-p6qg5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.753250 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85d8435b-daff-48b1-848a-c846eddae231-operator-scripts\") pod \"nova-api-db-create-7brgc\" (UID: \"85d8435b-daff-48b1-848a-c846eddae231\") " pod="openstack/nova-api-db-create-7brgc" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.774398 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-pmc84"] Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.775724 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pmc84" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.792916 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwtsj\" (UniqueName: \"kubernetes.io/projected/85d8435b-daff-48b1-848a-c846eddae231-kube-api-access-xwtsj\") pod \"nova-api-db-create-7brgc\" (UID: \"85d8435b-daff-48b1-848a-c846eddae231\") " pod="openstack/nova-api-db-create-7brgc" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.797297 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pmc84"] Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.857358 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfdsz\" (UniqueName: \"kubernetes.io/projected/729f84c4-c7cb-446d-b881-187f884dfe16-kube-api-access-qfdsz\") pod \"nova-api-9595-account-create-update-9qjxp\" (UID: \"729f84c4-c7cb-446d-b881-187f884dfe16\") " pod="openstack/nova-api-9595-account-create-update-9qjxp" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.857968 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdrtx\" (UniqueName: 
\"kubernetes.io/projected/ff314c7e-be05-483d-ac0e-7cccbd562ac4-kube-api-access-hdrtx\") pod \"nova-cell1-db-create-pmc84\" (UID: \"ff314c7e-be05-483d-ac0e-7cccbd562ac4\") " pod="openstack/nova-cell1-db-create-pmc84" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.858024 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35da62d7-c131-4115-9e20-9d412832b067-operator-scripts\") pod \"nova-cell0-db-create-p6qg5\" (UID: \"35da62d7-c131-4115-9e20-9d412832b067\") " pod="openstack/nova-cell0-db-create-p6qg5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.858055 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf6qz\" (UniqueName: \"kubernetes.io/projected/35da62d7-c131-4115-9e20-9d412832b067-kube-api-access-jf6qz\") pod \"nova-cell0-db-create-p6qg5\" (UID: \"35da62d7-c131-4115-9e20-9d412832b067\") " pod="openstack/nova-cell0-db-create-p6qg5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.858092 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729f84c4-c7cb-446d-b881-187f884dfe16-operator-scripts\") pod \"nova-api-9595-account-create-update-9qjxp\" (UID: \"729f84c4-c7cb-446d-b881-187f884dfe16\") " pod="openstack/nova-api-9595-account-create-update-9qjxp" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.858126 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff314c7e-be05-483d-ac0e-7cccbd562ac4-operator-scripts\") pod \"nova-cell1-db-create-pmc84\" (UID: \"ff314c7e-be05-483d-ac0e-7cccbd562ac4\") " pod="openstack/nova-cell1-db-create-pmc84" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.864287 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35da62d7-c131-4115-9e20-9d412832b067-operator-scripts\") pod \"nova-cell0-db-create-p6qg5\" (UID: \"35da62d7-c131-4115-9e20-9d412832b067\") " pod="openstack/nova-cell0-db-create-p6qg5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.885938 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7brgc" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.891008 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf6qz\" (UniqueName: \"kubernetes.io/projected/35da62d7-c131-4115-9e20-9d412832b067-kube-api-access-jf6qz\") pod \"nova-cell0-db-create-p6qg5\" (UID: \"35da62d7-c131-4115-9e20-9d412832b067\") " pod="openstack/nova-cell0-db-create-p6qg5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.896052 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-75cd-account-create-update-c6gvt"] Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.904818 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.914403 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-75cd-account-create-update-c6gvt"] Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.918250 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-p6qg5" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.926649 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.959024 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff314c7e-be05-483d-ac0e-7cccbd562ac4-operator-scripts\") pod \"nova-cell1-db-create-pmc84\" (UID: \"ff314c7e-be05-483d-ac0e-7cccbd562ac4\") " pod="openstack/nova-cell1-db-create-pmc84" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.959300 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcbeee6c-1328-4f1e-a53c-ba2f245620e6-operator-scripts\") pod \"nova-cell0-75cd-account-create-update-c6gvt\" (UID: \"fcbeee6c-1328-4f1e-a53c-ba2f245620e6\") " pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.959455 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p5vz\" (UniqueName: \"kubernetes.io/projected/fcbeee6c-1328-4f1e-a53c-ba2f245620e6-kube-api-access-8p5vz\") pod \"nova-cell0-75cd-account-create-update-c6gvt\" (UID: \"fcbeee6c-1328-4f1e-a53c-ba2f245620e6\") " pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.959578 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfdsz\" (UniqueName: \"kubernetes.io/projected/729f84c4-c7cb-446d-b881-187f884dfe16-kube-api-access-qfdsz\") pod \"nova-api-9595-account-create-update-9qjxp\" (UID: \"729f84c4-c7cb-446d-b881-187f884dfe16\") " pod="openstack/nova-api-9595-account-create-update-9qjxp" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 
20:45:02.959675 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdrtx\" (UniqueName: \"kubernetes.io/projected/ff314c7e-be05-483d-ac0e-7cccbd562ac4-kube-api-access-hdrtx\") pod \"nova-cell1-db-create-pmc84\" (UID: \"ff314c7e-be05-483d-ac0e-7cccbd562ac4\") " pod="openstack/nova-cell1-db-create-pmc84" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.959782 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729f84c4-c7cb-446d-b881-187f884dfe16-operator-scripts\") pod \"nova-api-9595-account-create-update-9qjxp\" (UID: \"729f84c4-c7cb-446d-b881-187f884dfe16\") " pod="openstack/nova-api-9595-account-create-update-9qjxp" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.960344 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff314c7e-be05-483d-ac0e-7cccbd562ac4-operator-scripts\") pod \"nova-cell1-db-create-pmc84\" (UID: \"ff314c7e-be05-483d-ac0e-7cccbd562ac4\") " pod="openstack/nova-cell1-db-create-pmc84" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.960633 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729f84c4-c7cb-446d-b881-187f884dfe16-operator-scripts\") pod \"nova-api-9595-account-create-update-9qjxp\" (UID: \"729f84c4-c7cb-446d-b881-187f884dfe16\") " pod="openstack/nova-api-9595-account-create-update-9qjxp" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.980945 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdrtx\" (UniqueName: \"kubernetes.io/projected/ff314c7e-be05-483d-ac0e-7cccbd562ac4-kube-api-access-hdrtx\") pod \"nova-cell1-db-create-pmc84\" (UID: \"ff314c7e-be05-483d-ac0e-7cccbd562ac4\") " pod="openstack/nova-cell1-db-create-pmc84" Nov 25 20:45:02 crc kubenswrapper[4983]: 
I1125 20:45:02.982815 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfdsz\" (UniqueName: \"kubernetes.io/projected/729f84c4-c7cb-446d-b881-187f884dfe16-kube-api-access-qfdsz\") pod \"nova-api-9595-account-create-update-9qjxp\" (UID: \"729f84c4-c7cb-446d-b881-187f884dfe16\") " pod="openstack/nova-api-9595-account-create-update-9qjxp" Nov 25 20:45:02 crc kubenswrapper[4983]: I1125 20:45:02.996925 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9595-account-create-update-9qjxp" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.063614 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcbeee6c-1328-4f1e-a53c-ba2f245620e6-operator-scripts\") pod \"nova-cell0-75cd-account-create-update-c6gvt\" (UID: \"fcbeee6c-1328-4f1e-a53c-ba2f245620e6\") " pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.063983 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p5vz\" (UniqueName: \"kubernetes.io/projected/fcbeee6c-1328-4f1e-a53c-ba2f245620e6-kube-api-access-8p5vz\") pod \"nova-cell0-75cd-account-create-update-c6gvt\" (UID: \"fcbeee6c-1328-4f1e-a53c-ba2f245620e6\") " pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.065581 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcbeee6c-1328-4f1e-a53c-ba2f245620e6-operator-scripts\") pod \"nova-cell0-75cd-account-create-update-c6gvt\" (UID: \"fcbeee6c-1328-4f1e-a53c-ba2f245620e6\") " pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.070785 4983 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-f1c8-account-create-update-ck2cx"] Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.073324 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.081866 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f1c8-account-create-update-ck2cx"] Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.082422 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.088500 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p5vz\" (UniqueName: \"kubernetes.io/projected/fcbeee6c-1328-4f1e-a53c-ba2f245620e6-kube-api-access-8p5vz\") pod \"nova-cell0-75cd-account-create-update-c6gvt\" (UID: \"fcbeee6c-1328-4f1e-a53c-ba2f245620e6\") " pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.157244 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pmc84" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.166253 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqbw2\" (UniqueName: \"kubernetes.io/projected/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9-kube-api-access-tqbw2\") pod \"nova-cell1-f1c8-account-create-update-ck2cx\" (UID: \"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9\") " pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.166956 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9-operator-scripts\") pod \"nova-cell1-f1c8-account-create-update-ck2cx\" (UID: \"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9\") " pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.213170 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6466c6df55-xffs5"] Nov 25 20:45:03 crc kubenswrapper[4983]: W1125 20:45:03.258754 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb05ecf5f_8220_40f4_b459_27d2dd7c6fbf.slice/crio-9b63a3712f61e2e73d3b8b8bb150dea6250e53637eff1773d34ba2b6372b4ca5 WatchSource:0}: Error finding container 9b63a3712f61e2e73d3b8b8bb150dea6250e53637eff1773d34ba2b6372b4ca5: Status 404 returned error can't find the container with id 9b63a3712f61e2e73d3b8b8bb150dea6250e53637eff1773d34ba2b6372b4ca5 Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.268244 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqbw2\" (UniqueName: \"kubernetes.io/projected/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9-kube-api-access-tqbw2\") pod \"nova-cell1-f1c8-account-create-update-ck2cx\" (UID: 
\"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9\") " pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.268457 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9-operator-scripts\") pod \"nova-cell1-f1c8-account-create-update-ck2cx\" (UID: \"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9\") " pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.269125 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9-operator-scripts\") pod \"nova-cell1-f1c8-account-create-update-ck2cx\" (UID: \"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9\") " pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.284990 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.287321 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqbw2\" (UniqueName: \"kubernetes.io/projected/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9-kube-api-access-tqbw2\") pod \"nova-cell1-f1c8-account-create-update-ck2cx\" (UID: \"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9\") " pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.417095 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.482882 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-7brgc"] Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.644890 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-p6qg5"] Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.654368 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.744491 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-9595-account-create-update-9qjxp"] Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.786336 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/361000b8-fec7-4af1-8453-05a888ce3db9-config-volume\") pod \"361000b8-fec7-4af1-8453-05a888ce3db9\" (UID: \"361000b8-fec7-4af1-8453-05a888ce3db9\") " Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.786596 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/361000b8-fec7-4af1-8453-05a888ce3db9-secret-volume\") pod \"361000b8-fec7-4af1-8453-05a888ce3db9\" (UID: \"361000b8-fec7-4af1-8453-05a888ce3db9\") " Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.786672 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbpqm\" (UniqueName: \"kubernetes.io/projected/361000b8-fec7-4af1-8453-05a888ce3db9-kube-api-access-cbpqm\") pod \"361000b8-fec7-4af1-8453-05a888ce3db9\" (UID: \"361000b8-fec7-4af1-8453-05a888ce3db9\") " Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.787534 4983 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/361000b8-fec7-4af1-8453-05a888ce3db9-config-volume" (OuterVolumeSpecName: "config-volume") pod "361000b8-fec7-4af1-8453-05a888ce3db9" (UID: "361000b8-fec7-4af1-8453-05a888ce3db9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.793875 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/361000b8-fec7-4af1-8453-05a888ce3db9-kube-api-access-cbpqm" (OuterVolumeSpecName: "kube-api-access-cbpqm") pod "361000b8-fec7-4af1-8453-05a888ce3db9" (UID: "361000b8-fec7-4af1-8453-05a888ce3db9"). InnerVolumeSpecName "kube-api-access-cbpqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.804861 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/361000b8-fec7-4af1-8453-05a888ce3db9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "361000b8-fec7-4af1-8453-05a888ce3db9" (UID: "361000b8-fec7-4af1-8453-05a888ce3db9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.819654 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.820338 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="ceilometer-central-agent" containerID="cri-o://6763ee7940cbafd1392877d8adf31c8371a328ff766be19e31cba1f8a9b4554b" gracePeriod=30 Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.820855 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="proxy-httpd" containerID="cri-o://fd864ebbd42773e691617327de00f7b1f14e7bd41c00ed75fe64da74ff149809" gracePeriod=30 Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.821029 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="ceilometer-notification-agent" containerID="cri-o://9e9f64c133c3b4995305f553ab463f5a4bdd79548a19e8e9596e93fc36812d6c" gracePeriod=30 Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.821078 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="sg-core" containerID="cri-o://4b620fb54090493d73e8f9746b45e4dfcd9711a21aedcacf63a3f75ee8336e82" gracePeriod=30 Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.862085 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pmc84"] Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.889246 4983 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/361000b8-fec7-4af1-8453-05a888ce3db9-secret-volume\") on node 
\"crc\" DevicePath \"\"" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.889293 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbpqm\" (UniqueName: \"kubernetes.io/projected/361000b8-fec7-4af1-8453-05a888ce3db9-kube-api-access-cbpqm\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.889312 4983 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/361000b8-fec7-4af1-8453-05a888ce3db9-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:03 crc kubenswrapper[4983]: I1125 20:45:03.927715 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.164:3000/\": read tcp 10.217.0.2:60818->10.217.0.164:3000: read: connection reset by peer" Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.026639 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-75cd-account-create-update-c6gvt"] Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.034277 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f1c8-account-create-update-ck2cx"] Nov 25 20:45:04 crc kubenswrapper[4983]: W1125 20:45:04.036386 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcbeee6c_1328_4f1e_a53c_ba2f245620e6.slice/crio-ce38945d572fd67e30a600c039e94be88a5a1c164f097b25d1c4b16128ac7748 WatchSource:0}: Error finding container ce38945d572fd67e30a600c039e94be88a5a1c164f097b25d1c4b16128ac7748: Status 404 returned error can't find the container with id ce38945d572fd67e30a600c039e94be88a5a1c164f097b25d1c4b16128ac7748 Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.111045 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7brgc" 
event={"ID":"85d8435b-daff-48b1-848a-c846eddae231","Type":"ContainerStarted","Data":"4c1311917ff5b05a49a4bceab6a8e15b2440be05d7516aa4f61b1325671b1c9a"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.111094 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7brgc" event={"ID":"85d8435b-daff-48b1-848a-c846eddae231","Type":"ContainerStarted","Data":"e6caf20f05cbdad7c669d9202c54dd4d37c4bc26301068b2f58ddfa658379114"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.121169 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" event={"ID":"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9","Type":"ContainerStarted","Data":"dd230d88b77c3b1fb95f6061c8b4dc8d6161c0c8c1e469cbbe4a6314659c5980"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.128329 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6466c6df55-xffs5" event={"ID":"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf","Type":"ContainerStarted","Data":"fc166b747606810df02b3204950af5003528def90fb312b325bc4a9f33304f3c"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.128381 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6466c6df55-xffs5" event={"ID":"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf","Type":"ContainerStarted","Data":"9b63a3712f61e2e73d3b8b8bb150dea6250e53637eff1773d34ba2b6372b4ca5"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.129613 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pmc84" event={"ID":"ff314c7e-be05-483d-ac0e-7cccbd562ac4","Type":"ContainerStarted","Data":"5dd689c638cfc355d9293014c5f9d0830e795028f2d43d93689f1f7cc39b85f5"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.138568 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-p6qg5" 
event={"ID":"35da62d7-c131-4115-9e20-9d412832b067","Type":"ContainerStarted","Data":"113a01566d9c9601471efe0ccdbc6764dafd000b685623f477017d0f16c90945"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.138626 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-p6qg5" event={"ID":"35da62d7-c131-4115-9e20-9d412832b067","Type":"ContainerStarted","Data":"6397deadc61befd82153019a6be8a739776f15edfd9b1e7db5c32f2d88e2e7e5"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.143163 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-7brgc" podStartSLOduration=2.143132901 podStartE2EDuration="2.143132901s" podCreationTimestamp="2025-11-25 20:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:04.12687253 +0000 UTC m=+1085.239405922" watchObservedRunningTime="2025-11-25 20:45:04.143132901 +0000 UTC m=+1085.255666293" Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.156234 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" event={"ID":"fcbeee6c-1328-4f1e-a53c-ba2f245620e6","Type":"ContainerStarted","Data":"ce38945d572fd67e30a600c039e94be88a5a1c164f097b25d1c4b16128ac7748"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.156870 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-pmc84" podStartSLOduration=2.156836494 podStartE2EDuration="2.156836494s" podCreationTimestamp="2025-11-25 20:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:04.156101444 +0000 UTC m=+1085.268634836" watchObservedRunningTime="2025-11-25 20:45:04.156836494 +0000 UTC m=+1085.269369886" Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.162801 4983 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9595-account-create-update-9qjxp" event={"ID":"729f84c4-c7cb-446d-b881-187f884dfe16","Type":"ContainerStarted","Data":"48821378d060b6bb6a864bf11e6032ff75d7f9269c07c2bfcc7f3106f7185938"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.162871 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9595-account-create-update-9qjxp" event={"ID":"729f84c4-c7cb-446d-b881-187f884dfe16","Type":"ContainerStarted","Data":"87dac031750b7345acf087a4b75108b152207724e35242f780a0c710e95e95aa"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.174042 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" event={"ID":"361000b8-fec7-4af1-8453-05a888ce3db9","Type":"ContainerDied","Data":"01e24c433ba5a48e1dee2742fee2fa29781faf033f83a033d7dc07829a930e75"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.174099 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01e24c433ba5a48e1dee2742fee2fa29781faf033f83a033d7dc07829a930e75" Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.174206 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck" Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.183287 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-p6qg5" podStartSLOduration=2.183264973 podStartE2EDuration="2.183264973s" podCreationTimestamp="2025-11-25 20:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:04.169466858 +0000 UTC m=+1085.282000260" watchObservedRunningTime="2025-11-25 20:45:04.183264973 +0000 UTC m=+1085.295798365" Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.195474 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-9595-account-create-update-9qjxp" podStartSLOduration=2.195453436 podStartE2EDuration="2.195453436s" podCreationTimestamp="2025-11-25 20:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:04.193285109 +0000 UTC m=+1085.305818501" watchObservedRunningTime="2025-11-25 20:45:04.195453436 +0000 UTC m=+1085.307986828" Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.197775 4983 generic.go:334] "Generic (PLEG): container finished" podID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerID="fd864ebbd42773e691617327de00f7b1f14e7bd41c00ed75fe64da74ff149809" exitCode=0 Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.197810 4983 generic.go:334] "Generic (PLEG): container finished" podID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerID="4b620fb54090493d73e8f9746b45e4dfcd9711a21aedcacf63a3f75ee8336e82" exitCode=2 Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.197838 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ce09d39b-1687-45a5-877c-a8e12876b41d","Type":"ContainerDied","Data":"fd864ebbd42773e691617327de00f7b1f14e7bd41c00ed75fe64da74ff149809"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.197870 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce09d39b-1687-45a5-877c-a8e12876b41d","Type":"ContainerDied","Data":"4b620fb54090493d73e8f9746b45e4dfcd9711a21aedcacf63a3f75ee8336e82"} Nov 25 20:45:04 crc kubenswrapper[4983]: I1125 20:45:04.427786 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.211855 4983 generic.go:334] "Generic (PLEG): container finished" podID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerID="6763ee7940cbafd1392877d8adf31c8371a328ff766be19e31cba1f8a9b4554b" exitCode=0 Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.211942 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce09d39b-1687-45a5-877c-a8e12876b41d","Type":"ContainerDied","Data":"6763ee7940cbafd1392877d8adf31c8371a328ff766be19e31cba1f8a9b4554b"} Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.214783 4983 generic.go:334] "Generic (PLEG): container finished" podID="85d8435b-daff-48b1-848a-c846eddae231" containerID="4c1311917ff5b05a49a4bceab6a8e15b2440be05d7516aa4f61b1325671b1c9a" exitCode=0 Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.214840 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7brgc" event={"ID":"85d8435b-daff-48b1-848a-c846eddae231","Type":"ContainerDied","Data":"4c1311917ff5b05a49a4bceab6a8e15b2440be05d7516aa4f61b1325671b1c9a"} Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.219500 4983 generic.go:334] "Generic (PLEG): container finished" podID="65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9" containerID="8697d144e10f252a712b92f758f2f90109cb1189ac8f408ec56c30a80c22ca34" exitCode=0 Nov 25 
20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.219731 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" event={"ID":"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9","Type":"ContainerDied","Data":"8697d144e10f252a712b92f758f2f90109cb1189ac8f408ec56c30a80c22ca34"} Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.222181 4983 generic.go:334] "Generic (PLEG): container finished" podID="fcbeee6c-1328-4f1e-a53c-ba2f245620e6" containerID="1e923a438d7eafb4d9d3c08715c5e9d1e285c8932454a55c0ef794398df933b4" exitCode=0 Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.222263 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" event={"ID":"fcbeee6c-1328-4f1e-a53c-ba2f245620e6","Type":"ContainerDied","Data":"1e923a438d7eafb4d9d3c08715c5e9d1e285c8932454a55c0ef794398df933b4"} Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.224515 4983 generic.go:334] "Generic (PLEG): container finished" podID="729f84c4-c7cb-446d-b881-187f884dfe16" containerID="48821378d060b6bb6a864bf11e6032ff75d7f9269c07c2bfcc7f3106f7185938" exitCode=0 Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.224697 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9595-account-create-update-9qjxp" event={"ID":"729f84c4-c7cb-446d-b881-187f884dfe16","Type":"ContainerDied","Data":"48821378d060b6bb6a864bf11e6032ff75d7f9269c07c2bfcc7f3106f7185938"} Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.232847 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6466c6df55-xffs5" event={"ID":"b05ecf5f-8220-40f4-b459-27d2dd7c6fbf","Type":"ContainerStarted","Data":"4bba1bcfff5d7b029d7326a5ad1a69426c95801561f8880b13760ef4fea41546"} Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.233785 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:05 crc 
kubenswrapper[4983]: I1125 20:45:05.233823 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.240916 4983 generic.go:334] "Generic (PLEG): container finished" podID="ff314c7e-be05-483d-ac0e-7cccbd562ac4" containerID="ec64a40892fe78b1305bb6754abc4d497590ae4dbe914a7a084c6c27d7eae9cd" exitCode=0 Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.240996 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pmc84" event={"ID":"ff314c7e-be05-483d-ac0e-7cccbd562ac4","Type":"ContainerDied","Data":"ec64a40892fe78b1305bb6754abc4d497590ae4dbe914a7a084c6c27d7eae9cd"} Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.251362 4983 generic.go:334] "Generic (PLEG): container finished" podID="35da62d7-c131-4115-9e20-9d412832b067" containerID="113a01566d9c9601471efe0ccdbc6764dafd000b685623f477017d0f16c90945" exitCode=0 Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.251430 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-p6qg5" event={"ID":"35da62d7-c131-4115-9e20-9d412832b067","Type":"ContainerDied","Data":"113a01566d9c9601471efe0ccdbc6764dafd000b685623f477017d0f16c90945"} Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.341476 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6466c6df55-xffs5" podStartSLOduration=3.34145487 podStartE2EDuration="3.34145487s" podCreationTimestamp="2025-11-25 20:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:05.334569138 +0000 UTC m=+1086.447102540" watchObservedRunningTime="2025-11-25 20:45:05.34145487 +0000 UTC m=+1086.453988262" Nov 25 20:45:05 crc kubenswrapper[4983]: I1125 20:45:05.657857 4983 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/horizon-746b6775bd-26zqf" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Nov 25 20:45:06 crc kubenswrapper[4983]: I1125 20:45:06.269492 4983 generic.go:334] "Generic (PLEG): container finished" podID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerID="9e9f64c133c3b4995305f553ab463f5a4bdd79548a19e8e9596e93fc36812d6c" exitCode=0 Nov 25 20:45:06 crc kubenswrapper[4983]: I1125 20:45:06.269649 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce09d39b-1687-45a5-877c-a8e12876b41d","Type":"ContainerDied","Data":"9e9f64c133c3b4995305f553ab463f5a4bdd79548a19e8e9596e93fc36812d6c"} Nov 25 20:45:09 crc kubenswrapper[4983]: I1125 20:45:09.656095 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 20:45:10 crc kubenswrapper[4983]: I1125 20:45:10.430921 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.164:3000/\": dial tcp 10.217.0.164:3000: connect: connection refused" Nov 25 20:45:10 crc kubenswrapper[4983]: I1125 20:45:10.866923 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9595-account-create-update-9qjxp" Nov 25 20:45:10 crc kubenswrapper[4983]: I1125 20:45:10.885801 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-7brgc" Nov 25 20:45:10 crc kubenswrapper[4983]: I1125 20:45:10.892743 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pmc84" Nov 25 20:45:10 crc kubenswrapper[4983]: I1125 20:45:10.898598 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" Nov 25 20:45:10 crc kubenswrapper[4983]: I1125 20:45:10.916675 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" Nov 25 20:45:10 crc kubenswrapper[4983]: I1125 20:45:10.927195 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-p6qg5" Nov 25 20:45:10 crc kubenswrapper[4983]: I1125 20:45:10.941283 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.042753 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-combined-ca-bundle\") pod \"ce09d39b-1687-45a5-877c-a8e12876b41d\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.042812 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfdsz\" (UniqueName: \"kubernetes.io/projected/729f84c4-c7cb-446d-b881-187f884dfe16-kube-api-access-qfdsz\") pod \"729f84c4-c7cb-446d-b881-187f884dfe16\" (UID: \"729f84c4-c7cb-446d-b881-187f884dfe16\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.042866 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcbeee6c-1328-4f1e-a53c-ba2f245620e6-operator-scripts\") pod \"fcbeee6c-1328-4f1e-a53c-ba2f245620e6\" (UID: \"fcbeee6c-1328-4f1e-a53c-ba2f245620e6\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.042890 4983 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-config-data\") pod \"ce09d39b-1687-45a5-877c-a8e12876b41d\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.042933 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce09d39b-1687-45a5-877c-a8e12876b41d-run-httpd\") pod \"ce09d39b-1687-45a5-877c-a8e12876b41d\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.042986 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff314c7e-be05-483d-ac0e-7cccbd562ac4-operator-scripts\") pod \"ff314c7e-be05-483d-ac0e-7cccbd562ac4\" (UID: \"ff314c7e-be05-483d-ac0e-7cccbd562ac4\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043015 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p5vz\" (UniqueName: \"kubernetes.io/projected/fcbeee6c-1328-4f1e-a53c-ba2f245620e6-kube-api-access-8p5vz\") pod \"fcbeee6c-1328-4f1e-a53c-ba2f245620e6\" (UID: \"fcbeee6c-1328-4f1e-a53c-ba2f245620e6\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043036 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85d8435b-daff-48b1-848a-c846eddae231-operator-scripts\") pod \"85d8435b-daff-48b1-848a-c846eddae231\" (UID: \"85d8435b-daff-48b1-848a-c846eddae231\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043090 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqbw2\" (UniqueName: \"kubernetes.io/projected/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9-kube-api-access-tqbw2\") pod 
\"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9\" (UID: \"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043139 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf6qz\" (UniqueName: \"kubernetes.io/projected/35da62d7-c131-4115-9e20-9d412832b067-kube-api-access-jf6qz\") pod \"35da62d7-c131-4115-9e20-9d412832b067\" (UID: \"35da62d7-c131-4115-9e20-9d412832b067\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043163 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-scripts\") pod \"ce09d39b-1687-45a5-877c-a8e12876b41d\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043210 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwtsj\" (UniqueName: \"kubernetes.io/projected/85d8435b-daff-48b1-848a-c846eddae231-kube-api-access-xwtsj\") pod \"85d8435b-daff-48b1-848a-c846eddae231\" (UID: \"85d8435b-daff-48b1-848a-c846eddae231\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043235 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce09d39b-1687-45a5-877c-a8e12876b41d-log-httpd\") pod \"ce09d39b-1687-45a5-877c-a8e12876b41d\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043260 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/729f84c4-c7cb-446d-b881-187f884dfe16-operator-scripts\") pod \"729f84c4-c7cb-446d-b881-187f884dfe16\" (UID: \"729f84c4-c7cb-446d-b881-187f884dfe16\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043275 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35da62d7-c131-4115-9e20-9d412832b067-operator-scripts\") pod \"35da62d7-c131-4115-9e20-9d412832b067\" (UID: \"35da62d7-c131-4115-9e20-9d412832b067\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043292 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdrtx\" (UniqueName: \"kubernetes.io/projected/ff314c7e-be05-483d-ac0e-7cccbd562ac4-kube-api-access-hdrtx\") pod \"ff314c7e-be05-483d-ac0e-7cccbd562ac4\" (UID: \"ff314c7e-be05-483d-ac0e-7cccbd562ac4\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043309 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l74ss\" (UniqueName: \"kubernetes.io/projected/ce09d39b-1687-45a5-877c-a8e12876b41d-kube-api-access-l74ss\") pod \"ce09d39b-1687-45a5-877c-a8e12876b41d\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043338 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-sg-core-conf-yaml\") pod \"ce09d39b-1687-45a5-877c-a8e12876b41d\" (UID: \"ce09d39b-1687-45a5-877c-a8e12876b41d\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.043376 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9-operator-scripts\") pod \"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9\" (UID: \"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9\") " Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.044149 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9" (UID: 
"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.044548 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35da62d7-c131-4115-9e20-9d412832b067-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35da62d7-c131-4115-9e20-9d412832b067" (UID: "35da62d7-c131-4115-9e20-9d412832b067"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.044878 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729f84c4-c7cb-446d-b881-187f884dfe16-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "729f84c4-c7cb-446d-b881-187f884dfe16" (UID: "729f84c4-c7cb-446d-b881-187f884dfe16"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.046283 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff314c7e-be05-483d-ac0e-7cccbd562ac4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff314c7e-be05-483d-ac0e-7cccbd562ac4" (UID: "ff314c7e-be05-483d-ac0e-7cccbd562ac4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.048827 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce09d39b-1687-45a5-877c-a8e12876b41d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ce09d39b-1687-45a5-877c-a8e12876b41d" (UID: "ce09d39b-1687-45a5-877c-a8e12876b41d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.049021 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d8435b-daff-48b1-848a-c846eddae231-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85d8435b-daff-48b1-848a-c846eddae231" (UID: "85d8435b-daff-48b1-848a-c846eddae231"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.049273 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35da62d7-c131-4115-9e20-9d412832b067-kube-api-access-jf6qz" (OuterVolumeSpecName: "kube-api-access-jf6qz") pod "35da62d7-c131-4115-9e20-9d412832b067" (UID: "35da62d7-c131-4115-9e20-9d412832b067"). InnerVolumeSpecName "kube-api-access-jf6qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.049324 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff314c7e-be05-483d-ac0e-7cccbd562ac4-kube-api-access-hdrtx" (OuterVolumeSpecName: "kube-api-access-hdrtx") pod "ff314c7e-be05-483d-ac0e-7cccbd562ac4" (UID: "ff314c7e-be05-483d-ac0e-7cccbd562ac4"). InnerVolumeSpecName "kube-api-access-hdrtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.049570 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcbeee6c-1328-4f1e-a53c-ba2f245620e6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fcbeee6c-1328-4f1e-a53c-ba2f245620e6" (UID: "fcbeee6c-1328-4f1e-a53c-ba2f245620e6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.050205 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce09d39b-1687-45a5-877c-a8e12876b41d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ce09d39b-1687-45a5-877c-a8e12876b41d" (UID: "ce09d39b-1687-45a5-877c-a8e12876b41d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.050881 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d8435b-daff-48b1-848a-c846eddae231-kube-api-access-xwtsj" (OuterVolumeSpecName: "kube-api-access-xwtsj") pod "85d8435b-daff-48b1-848a-c846eddae231" (UID: "85d8435b-daff-48b1-848a-c846eddae231"). InnerVolumeSpecName "kube-api-access-xwtsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.051398 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729f84c4-c7cb-446d-b881-187f884dfe16-kube-api-access-qfdsz" (OuterVolumeSpecName: "kube-api-access-qfdsz") pod "729f84c4-c7cb-446d-b881-187f884dfe16" (UID: "729f84c4-c7cb-446d-b881-187f884dfe16"). InnerVolumeSpecName "kube-api-access-qfdsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.052529 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce09d39b-1687-45a5-877c-a8e12876b41d-kube-api-access-l74ss" (OuterVolumeSpecName: "kube-api-access-l74ss") pod "ce09d39b-1687-45a5-877c-a8e12876b41d" (UID: "ce09d39b-1687-45a5-877c-a8e12876b41d"). InnerVolumeSpecName "kube-api-access-l74ss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.052653 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9-kube-api-access-tqbw2" (OuterVolumeSpecName: "kube-api-access-tqbw2") pod "65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9" (UID: "65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9"). InnerVolumeSpecName "kube-api-access-tqbw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.052712 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcbeee6c-1328-4f1e-a53c-ba2f245620e6-kube-api-access-8p5vz" (OuterVolumeSpecName: "kube-api-access-8p5vz") pod "fcbeee6c-1328-4f1e-a53c-ba2f245620e6" (UID: "fcbeee6c-1328-4f1e-a53c-ba2f245620e6"). InnerVolumeSpecName "kube-api-access-8p5vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.055535 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-scripts" (OuterVolumeSpecName: "scripts") pod "ce09d39b-1687-45a5-877c-a8e12876b41d" (UID: "ce09d39b-1687-45a5-877c-a8e12876b41d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.076671 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ce09d39b-1687-45a5-877c-a8e12876b41d" (UID: "ce09d39b-1687-45a5-877c-a8e12876b41d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.144575 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce09d39b-1687-45a5-877c-a8e12876b41d" (UID: "ce09d39b-1687-45a5-877c-a8e12876b41d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146281 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqbw2\" (UniqueName: \"kubernetes.io/projected/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9-kube-api-access-tqbw2\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146313 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf6qz\" (UniqueName: \"kubernetes.io/projected/35da62d7-c131-4115-9e20-9d412832b067-kube-api-access-jf6qz\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146324 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146334 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwtsj\" (UniqueName: \"kubernetes.io/projected/85d8435b-daff-48b1-848a-c846eddae231-kube-api-access-xwtsj\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146345 4983 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce09d39b-1687-45a5-877c-a8e12876b41d-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146380 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/729f84c4-c7cb-446d-b881-187f884dfe16-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146389 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35da62d7-c131-4115-9e20-9d412832b067-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146398 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdrtx\" (UniqueName: \"kubernetes.io/projected/ff314c7e-be05-483d-ac0e-7cccbd562ac4-kube-api-access-hdrtx\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146407 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l74ss\" (UniqueName: \"kubernetes.io/projected/ce09d39b-1687-45a5-877c-a8e12876b41d-kube-api-access-l74ss\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146415 4983 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146424 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146432 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146441 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfdsz\" (UniqueName: 
\"kubernetes.io/projected/729f84c4-c7cb-446d-b881-187f884dfe16-kube-api-access-qfdsz\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146450 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcbeee6c-1328-4f1e-a53c-ba2f245620e6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146458 4983 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce09d39b-1687-45a5-877c-a8e12876b41d-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146467 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff314c7e-be05-483d-ac0e-7cccbd562ac4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146475 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p5vz\" (UniqueName: \"kubernetes.io/projected/fcbeee6c-1328-4f1e-a53c-ba2f245620e6-kube-api-access-8p5vz\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.146484 4983 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85d8435b-daff-48b1-848a-c846eddae231-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.170310 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-config-data" (OuterVolumeSpecName: "config-data") pod "ce09d39b-1687-45a5-877c-a8e12876b41d" (UID: "ce09d39b-1687-45a5-877c-a8e12876b41d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.248443 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce09d39b-1687-45a5-877c-a8e12876b41d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.319734 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pmc84" event={"ID":"ff314c7e-be05-483d-ac0e-7cccbd562ac4","Type":"ContainerDied","Data":"5dd689c638cfc355d9293014c5f9d0830e795028f2d43d93689f1f7cc39b85f5"} Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.319789 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dd689c638cfc355d9293014c5f9d0830e795028f2d43d93689f1f7cc39b85f5" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.319753 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pmc84" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.324545 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-p6qg5" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.324540 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-p6qg5" event={"ID":"35da62d7-c131-4115-9e20-9d412832b067","Type":"ContainerDied","Data":"6397deadc61befd82153019a6be8a739776f15edfd9b1e7db5c32f2d88e2e7e5"} Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.324888 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6397deadc61befd82153019a6be8a739776f15edfd9b1e7db5c32f2d88e2e7e5" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.327684 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce09d39b-1687-45a5-877c-a8e12876b41d","Type":"ContainerDied","Data":"f55ed61cae7f0002f11ae0f4f199e850120e22a5c9e41465541df3523fa47e20"} Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.327779 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.327884 4983 scope.go:117] "RemoveContainer" containerID="fd864ebbd42773e691617327de00f7b1f14e7bd41c00ed75fe64da74ff149809" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.329228 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3b2fefe1-596f-4e7c-8de9-b3c019ed40ea","Type":"ContainerStarted","Data":"e927c3c44d9a2e35187aab086928f59ee4b6f596d9d85614dba6c0eb0a16bda9"} Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.335923 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-7brgc" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.336092 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-7brgc" event={"ID":"85d8435b-daff-48b1-848a-c846eddae231","Type":"ContainerDied","Data":"e6caf20f05cbdad7c669d9202c54dd4d37c4bc26301068b2f58ddfa658379114"} Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.336133 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6caf20f05cbdad7c669d9202c54dd4d37c4bc26301068b2f58ddfa658379114" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.342261 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.342270 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f1c8-account-create-update-ck2cx" event={"ID":"65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9","Type":"ContainerDied","Data":"dd230d88b77c3b1fb95f6061c8b4dc8d6161c0c8c1e469cbbe4a6314659c5980"} Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.342303 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd230d88b77c3b1fb95f6061c8b4dc8d6161c0c8c1e469cbbe4a6314659c5980" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.345815 4983 scope.go:117] "RemoveContainer" containerID="4b620fb54090493d73e8f9746b45e4dfcd9711a21aedcacf63a3f75ee8336e82" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.350211 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" event={"ID":"fcbeee6c-1328-4f1e-a53c-ba2f245620e6","Type":"ContainerDied","Data":"ce38945d572fd67e30a600c039e94be88a5a1c164f097b25d1c4b16128ac7748"} Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.350254 4983 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ce38945d572fd67e30a600c039e94be88a5a1c164f097b25d1c4b16128ac7748" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.350277 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-75cd-account-create-update-c6gvt" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.352908 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-9595-account-create-update-9qjxp" event={"ID":"729f84c4-c7cb-446d-b881-187f884dfe16","Type":"ContainerDied","Data":"87dac031750b7345acf087a4b75108b152207724e35242f780a0c710e95e95aa"} Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.352944 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87dac031750b7345acf087a4b75108b152207724e35242f780a0c710e95e95aa" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.353018 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-9595-account-create-update-9qjxp" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.358380 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.113158488 podStartE2EDuration="14.358357133s" podCreationTimestamp="2025-11-25 20:44:57 +0000 UTC" firstStartedPulling="2025-11-25 20:44:58.390878345 +0000 UTC m=+1079.503411737" lastFinishedPulling="2025-11-25 20:45:10.63607699 +0000 UTC m=+1091.748610382" observedRunningTime="2025-11-25 20:45:11.346080928 +0000 UTC m=+1092.458614350" watchObservedRunningTime="2025-11-25 20:45:11.358357133 +0000 UTC m=+1092.470890525" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.389523 4983 scope.go:117] "RemoveContainer" containerID="9e9f64c133c3b4995305f553ab463f5a4bdd79548a19e8e9596e93fc36812d6c" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.404338 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:11 crc 
kubenswrapper[4983]: I1125 20:45:11.416925 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.421341 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:11 crc kubenswrapper[4983]: E1125 20:45:11.421819 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729f84c4-c7cb-446d-b881-187f884dfe16" containerName="mariadb-account-create-update" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.421840 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="729f84c4-c7cb-446d-b881-187f884dfe16" containerName="mariadb-account-create-update" Nov 25 20:45:11 crc kubenswrapper[4983]: E1125 20:45:11.421864 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff314c7e-be05-483d-ac0e-7cccbd562ac4" containerName="mariadb-database-create" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.421873 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff314c7e-be05-483d-ac0e-7cccbd562ac4" containerName="mariadb-database-create" Nov 25 20:45:11 crc kubenswrapper[4983]: E1125 20:45:11.421893 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="proxy-httpd" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.421901 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="proxy-httpd" Nov 25 20:45:11 crc kubenswrapper[4983]: E1125 20:45:11.421910 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcbeee6c-1328-4f1e-a53c-ba2f245620e6" containerName="mariadb-account-create-update" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.421917 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcbeee6c-1328-4f1e-a53c-ba2f245620e6" containerName="mariadb-account-create-update" Nov 25 20:45:11 crc kubenswrapper[4983]: E1125 20:45:11.421931 4983 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="361000b8-fec7-4af1-8453-05a888ce3db9" containerName="collect-profiles" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.421939 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="361000b8-fec7-4af1-8453-05a888ce3db9" containerName="collect-profiles" Nov 25 20:45:11 crc kubenswrapper[4983]: E1125 20:45:11.421958 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="ceilometer-notification-agent" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.421966 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="ceilometer-notification-agent" Nov 25 20:45:11 crc kubenswrapper[4983]: E1125 20:45:11.421972 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="sg-core" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.421979 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="sg-core" Nov 25 20:45:11 crc kubenswrapper[4983]: E1125 20:45:11.421989 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d8435b-daff-48b1-848a-c846eddae231" containerName="mariadb-database-create" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.421996 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d8435b-daff-48b1-848a-c846eddae231" containerName="mariadb-database-create" Nov 25 20:45:11 crc kubenswrapper[4983]: E1125 20:45:11.422013 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9" containerName="mariadb-account-create-update" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422020 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9" containerName="mariadb-account-create-update" Nov 25 20:45:11 crc 
kubenswrapper[4983]: E1125 20:45:11.422035 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35da62d7-c131-4115-9e20-9d412832b067" containerName="mariadb-database-create" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422040 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="35da62d7-c131-4115-9e20-9d412832b067" containerName="mariadb-database-create" Nov 25 20:45:11 crc kubenswrapper[4983]: E1125 20:45:11.422049 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="ceilometer-central-agent" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422055 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="ceilometer-central-agent" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422218 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9" containerName="mariadb-account-create-update" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422229 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d8435b-daff-48b1-848a-c846eddae231" containerName="mariadb-database-create" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422241 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff314c7e-be05-483d-ac0e-7cccbd562ac4" containerName="mariadb-database-create" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422256 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="ceilometer-notification-agent" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422266 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="ceilometer-central-agent" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422275 4983 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="proxy-httpd" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422287 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="35da62d7-c131-4115-9e20-9d412832b067" containerName="mariadb-database-create" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422296 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcbeee6c-1328-4f1e-a53c-ba2f245620e6" containerName="mariadb-account-create-update" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422307 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" containerName="sg-core" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422316 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="361000b8-fec7-4af1-8453-05a888ce3db9" containerName="collect-profiles" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.422330 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="729f84c4-c7cb-446d-b881-187f884dfe16" containerName="mariadb-account-create-update" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.424033 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.432132 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.433239 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.445988 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.461297 4983 scope.go:117] "RemoveContainer" containerID="6763ee7940cbafd1392877d8adf31c8371a328ff766be19e31cba1f8a9b4554b" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.561742 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-run-httpd\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.561828 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-scripts\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.562566 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.562671 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-config-data\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.562732 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zrns\" (UniqueName: \"kubernetes.io/projected/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-kube-api-access-8zrns\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.562764 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.562791 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-log-httpd\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.615215 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce09d39b-1687-45a5-877c-a8e12876b41d" path="/var/lib/kubelet/pods/ce09d39b-1687-45a5-877c-a8e12876b41d/volumes" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.664750 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.664864 
4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-config-data\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.664943 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zrns\" (UniqueName: \"kubernetes.io/projected/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-kube-api-access-8zrns\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.664975 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.665025 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-log-httpd\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.665094 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-run-httpd\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.665186 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-scripts\") pod \"ceilometer-0\" (UID: 
\"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.665882 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-run-httpd\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.666167 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-log-httpd\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.670709 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.671223 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-config-data\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.673177 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.674480 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-scripts\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.682576 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zrns\" (UniqueName: \"kubernetes.io/projected/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-kube-api-access-8zrns\") pod \"ceilometer-0\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") " pod="openstack/ceilometer-0" Nov 25 20:45:11 crc kubenswrapper[4983]: I1125 20:45:11.744666 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:45:12 crc kubenswrapper[4983]: I1125 20:45:12.084222 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:12 crc kubenswrapper[4983]: I1125 20:45:12.244356 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:12 crc kubenswrapper[4983]: I1125 20:45:12.364166 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eeb5b089-8340-4b71-88f9-7fcbeecb4d78","Type":"ContainerStarted","Data":"72bcec2716074406da7000f91f96c55230ca72a18403fa60033941f2f7d958e5"} Nov 25 20:45:12 crc kubenswrapper[4983]: I1125 20:45:12.473528 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:12 crc kubenswrapper[4983]: I1125 20:45:12.478058 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6466c6df55-xffs5" Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.227472 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6fck8"] Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.229772 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6fck8" Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.232237 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.232406 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.233724 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2dxqn" Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.257871 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6fck8"] Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.379184 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eeb5b089-8340-4b71-88f9-7fcbeecb4d78","Type":"ContainerStarted","Data":"90132a70f624701dc9f0e53a5cdda0bda39a94fdbe408f96ad69deef92f3f8e0"} Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.406406 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7q56\" (UniqueName: \"kubernetes.io/projected/06ccc433-0041-48b3-906e-8b7ff8ef57ab-kube-api-access-v7q56\") pod \"nova-cell0-conductor-db-sync-6fck8\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " pod="openstack/nova-cell0-conductor-db-sync-6fck8" Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.406566 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-config-data\") pod \"nova-cell0-conductor-db-sync-6fck8\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " pod="openstack/nova-cell0-conductor-db-sync-6fck8" Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.406645 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6fck8\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " pod="openstack/nova-cell0-conductor-db-sync-6fck8"
Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.406676 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-scripts\") pod \"nova-cell0-conductor-db-sync-6fck8\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " pod="openstack/nova-cell0-conductor-db-sync-6fck8"
Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.513252 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7q56\" (UniqueName: \"kubernetes.io/projected/06ccc433-0041-48b3-906e-8b7ff8ef57ab-kube-api-access-v7q56\") pod \"nova-cell0-conductor-db-sync-6fck8\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " pod="openstack/nova-cell0-conductor-db-sync-6fck8"
Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.514168 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-config-data\") pod \"nova-cell0-conductor-db-sync-6fck8\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " pod="openstack/nova-cell0-conductor-db-sync-6fck8"
Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.514280 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6fck8\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " pod="openstack/nova-cell0-conductor-db-sync-6fck8"
Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.514317 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-scripts\") pod \"nova-cell0-conductor-db-sync-6fck8\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " pod="openstack/nova-cell0-conductor-db-sync-6fck8"
Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.521628 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-scripts\") pod \"nova-cell0-conductor-db-sync-6fck8\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " pod="openstack/nova-cell0-conductor-db-sync-6fck8"
Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.521740 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-config-data\") pod \"nova-cell0-conductor-db-sync-6fck8\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " pod="openstack/nova-cell0-conductor-db-sync-6fck8"
Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.522227 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6fck8\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " pod="openstack/nova-cell0-conductor-db-sync-6fck8"
Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.537661 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7q56\" (UniqueName: \"kubernetes.io/projected/06ccc433-0041-48b3-906e-8b7ff8ef57ab-kube-api-access-v7q56\") pod \"nova-cell0-conductor-db-sync-6fck8\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " pod="openstack/nova-cell0-conductor-db-sync-6fck8"
Nov 25 20:45:13 crc kubenswrapper[4983]: I1125 20:45:13.553361 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6fck8"
Nov 25 20:45:14 crc kubenswrapper[4983]: I1125 20:45:14.147120 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6fck8"]
Nov 25 20:45:14 crc kubenswrapper[4983]: W1125 20:45:14.149146 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06ccc433_0041_48b3_906e_8b7ff8ef57ab.slice/crio-b0bf33770ea5ca4fcc4c79341d28b4f79d7cc3531f6f5124e26cf611f5a3a693 WatchSource:0}: Error finding container b0bf33770ea5ca4fcc4c79341d28b4f79d7cc3531f6f5124e26cf611f5a3a693: Status 404 returned error can't find the container with id b0bf33770ea5ca4fcc4c79341d28b4f79d7cc3531f6f5124e26cf611f5a3a693
Nov 25 20:45:14 crc kubenswrapper[4983]: I1125 20:45:14.389123 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6fck8" event={"ID":"06ccc433-0041-48b3-906e-8b7ff8ef57ab","Type":"ContainerStarted","Data":"b0bf33770ea5ca4fcc4c79341d28b4f79d7cc3531f6f5124e26cf611f5a3a693"}
Nov 25 20:45:14 crc kubenswrapper[4983]: I1125 20:45:14.391954 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eeb5b089-8340-4b71-88f9-7fcbeecb4d78","Type":"ContainerStarted","Data":"975306c0cd1a09c33e0405574bb510589cab3fcccf51b1e2c7027adcc2c33b21"}
Nov 25 20:45:15 crc kubenswrapper[4983]: I1125 20:45:15.408037 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eeb5b089-8340-4b71-88f9-7fcbeecb4d78","Type":"ContainerStarted","Data":"9ea46cdb47f9b366567773f775e0480e15c50df2ae64150b46f0b9224dfbaaba"}
Nov 25 20:45:15 crc kubenswrapper[4983]: I1125 20:45:15.658339 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-746b6775bd-26zqf" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Nov 25 20:45:16 crc kubenswrapper[4983]: I1125 20:45:16.428110 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eeb5b089-8340-4b71-88f9-7fcbeecb4d78","Type":"ContainerStarted","Data":"45abeb2ecf1dc6059abfb445da435aa27fa3cd007301a89ae0977ec8595a0ff9"}
Nov 25 20:45:16 crc kubenswrapper[4983]: I1125 20:45:16.428767 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="ceilometer-central-agent" containerID="cri-o://90132a70f624701dc9f0e53a5cdda0bda39a94fdbe408f96ad69deef92f3f8e0" gracePeriod=30
Nov 25 20:45:16 crc kubenswrapper[4983]: I1125 20:45:16.429169 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 25 20:45:16 crc kubenswrapper[4983]: I1125 20:45:16.429535 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="proxy-httpd" containerID="cri-o://45abeb2ecf1dc6059abfb445da435aa27fa3cd007301a89ae0977ec8595a0ff9" gracePeriod=30
Nov 25 20:45:16 crc kubenswrapper[4983]: I1125 20:45:16.429702 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="sg-core" containerID="cri-o://9ea46cdb47f9b366567773f775e0480e15c50df2ae64150b46f0b9224dfbaaba" gracePeriod=30
Nov 25 20:45:16 crc kubenswrapper[4983]: I1125 20:45:16.429752 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="ceilometer-notification-agent" containerID="cri-o://975306c0cd1a09c33e0405574bb510589cab3fcccf51b1e2c7027adcc2c33b21" gracePeriod=30
Nov 25 20:45:16 crc kubenswrapper[4983]: I1125 20:45:16.454749 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.07550782 podStartE2EDuration="5.454726864s" podCreationTimestamp="2025-11-25 20:45:11 +0000 UTC" firstStartedPulling="2025-11-25 20:45:12.252522759 +0000 UTC m=+1093.365056151" lastFinishedPulling="2025-11-25 20:45:15.631741803 +0000 UTC m=+1096.744275195" observedRunningTime="2025-11-25 20:45:16.448618882 +0000 UTC m=+1097.561152274" watchObservedRunningTime="2025-11-25 20:45:16.454726864 +0000 UTC m=+1097.567260256"
Nov 25 20:45:16 crc kubenswrapper[4983]: E1125 20:45:16.619880 4983 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeb5b089_8340_4b71_88f9_7fcbeecb4d78.slice/crio-conmon-9ea46cdb47f9b366567773f775e0480e15c50df2ae64150b46f0b9224dfbaaba.scope\": RecentStats: unable to find data in memory cache]"
Nov 25 20:45:17 crc kubenswrapper[4983]: I1125 20:45:17.441239 4983 generic.go:334] "Generic (PLEG): container finished" podID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerID="45abeb2ecf1dc6059abfb445da435aa27fa3cd007301a89ae0977ec8595a0ff9" exitCode=0
Nov 25 20:45:17 crc kubenswrapper[4983]: I1125 20:45:17.441573 4983 generic.go:334] "Generic (PLEG): container finished" podID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerID="9ea46cdb47f9b366567773f775e0480e15c50df2ae64150b46f0b9224dfbaaba" exitCode=2
Nov 25 20:45:17 crc kubenswrapper[4983]: I1125 20:45:17.441544 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eeb5b089-8340-4b71-88f9-7fcbeecb4d78","Type":"ContainerDied","Data":"45abeb2ecf1dc6059abfb445da435aa27fa3cd007301a89ae0977ec8595a0ff9"}
Nov 25 20:45:17 crc kubenswrapper[4983]: I1125 20:45:17.441614 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eeb5b089-8340-4b71-88f9-7fcbeecb4d78","Type":"ContainerDied","Data":"9ea46cdb47f9b366567773f775e0480e15c50df2ae64150b46f0b9224dfbaaba"}
Nov 25 20:45:17 crc kubenswrapper[4983]: I1125 20:45:17.441637 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eeb5b089-8340-4b71-88f9-7fcbeecb4d78","Type":"ContainerDied","Data":"975306c0cd1a09c33e0405574bb510589cab3fcccf51b1e2c7027adcc2c33b21"}
Nov 25 20:45:17 crc kubenswrapper[4983]: I1125 20:45:17.441585 4983 generic.go:334] "Generic (PLEG): container finished" podID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerID="975306c0cd1a09c33e0405574bb510589cab3fcccf51b1e2c7027adcc2c33b21" exitCode=0
Nov 25 20:45:22 crc kubenswrapper[4983]: I1125 20:45:22.506122 4983 generic.go:334] "Generic (PLEG): container finished" podID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerID="258950fcf68cd7c9df940549ae0451b1b9f70f389d65405e3ab17da233c2b00c" exitCode=137
Nov 25 20:45:22 crc kubenswrapper[4983]: I1125 20:45:22.506269 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-746b6775bd-26zqf" event={"ID":"1ac04518-4a47-43b3-8e9f-84e8f3a80648","Type":"ContainerDied","Data":"258950fcf68cd7c9df940549ae0451b1b9f70f389d65405e3ab17da233c2b00c"}
Nov 25 20:45:23 crc kubenswrapper[4983]: I1125 20:45:23.518627 4983 generic.go:334] "Generic (PLEG): container finished" podID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerID="90132a70f624701dc9f0e53a5cdda0bda39a94fdbe408f96ad69deef92f3f8e0" exitCode=0
Nov 25 20:45:23 crc kubenswrapper[4983]: I1125 20:45:23.518679 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eeb5b089-8340-4b71-88f9-7fcbeecb4d78","Type":"ContainerDied","Data":"90132a70f624701dc9f0e53a5cdda0bda39a94fdbe408f96ad69deef92f3f8e0"}
Nov 25 20:45:23 crc kubenswrapper[4983]: I1125 20:45:23.942673 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-746b6775bd-26zqf"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.000080 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.044857 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ac04518-4a47-43b3-8e9f-84e8f3a80648-config-data\") pod \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.044907 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-combined-ca-bundle\") pod \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.044984 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-horizon-tls-certs\") pod \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.045092 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ac04518-4a47-43b3-8e9f-84e8f3a80648-scripts\") pod \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.045138 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-horizon-secret-key\") pod \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.045177 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ac04518-4a47-43b3-8e9f-84e8f3a80648-logs\") pod \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.045197 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xcjg\" (UniqueName: \"kubernetes.io/projected/1ac04518-4a47-43b3-8e9f-84e8f3a80648-kube-api-access-2xcjg\") pod \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\" (UID: \"1ac04518-4a47-43b3-8e9f-84e8f3a80648\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.048087 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ac04518-4a47-43b3-8e9f-84e8f3a80648-logs" (OuterVolumeSpecName: "logs") pod "1ac04518-4a47-43b3-8e9f-84e8f3a80648" (UID: "1ac04518-4a47-43b3-8e9f-84e8f3a80648"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.055723 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1ac04518-4a47-43b3-8e9f-84e8f3a80648" (UID: "1ac04518-4a47-43b3-8e9f-84e8f3a80648"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.067478 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ac04518-4a47-43b3-8e9f-84e8f3a80648-kube-api-access-2xcjg" (OuterVolumeSpecName: "kube-api-access-2xcjg") pod "1ac04518-4a47-43b3-8e9f-84e8f3a80648" (UID: "1ac04518-4a47-43b3-8e9f-84e8f3a80648"). InnerVolumeSpecName "kube-api-access-2xcjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.074404 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ac04518-4a47-43b3-8e9f-84e8f3a80648-config-data" (OuterVolumeSpecName: "config-data") pod "1ac04518-4a47-43b3-8e9f-84e8f3a80648" (UID: "1ac04518-4a47-43b3-8e9f-84e8f3a80648"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.081692 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ac04518-4a47-43b3-8e9f-84e8f3a80648-scripts" (OuterVolumeSpecName: "scripts") pod "1ac04518-4a47-43b3-8e9f-84e8f3a80648" (UID: "1ac04518-4a47-43b3-8e9f-84e8f3a80648"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.090211 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ac04518-4a47-43b3-8e9f-84e8f3a80648" (UID: "1ac04518-4a47-43b3-8e9f-84e8f3a80648"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.111635 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "1ac04518-4a47-43b3-8e9f-84e8f3a80648" (UID: "1ac04518-4a47-43b3-8e9f-84e8f3a80648"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.147571 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-combined-ca-bundle\") pod \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.147658 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-config-data\") pod \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.147688 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-sg-core-conf-yaml\") pod \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.147767 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-scripts\") pod \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.147799 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-run-httpd\") pod \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.147887 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zrns\" (UniqueName: \"kubernetes.io/projected/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-kube-api-access-8zrns\") pod \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.147908 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-log-httpd\") pod \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\" (UID: \"eeb5b089-8340-4b71-88f9-7fcbeecb4d78\") "
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.148507 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eeb5b089-8340-4b71-88f9-7fcbeecb4d78" (UID: "eeb5b089-8340-4b71-88f9-7fcbeecb4d78"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.148609 4983 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.148623 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ac04518-4a47-43b3-8e9f-84e8f3a80648-logs\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.148634 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xcjg\" (UniqueName: \"kubernetes.io/projected/1ac04518-4a47-43b3-8e9f-84e8f3a80648-kube-api-access-2xcjg\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.148646 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ac04518-4a47-43b3-8e9f-84e8f3a80648-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.148655 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.148665 4983 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac04518-4a47-43b3-8e9f-84e8f3a80648-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.148674 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ac04518-4a47-43b3-8e9f-84e8f3a80648-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.149085 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eeb5b089-8340-4b71-88f9-7fcbeecb4d78" (UID: "eeb5b089-8340-4b71-88f9-7fcbeecb4d78"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.150924 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-scripts" (OuterVolumeSpecName: "scripts") pod "eeb5b089-8340-4b71-88f9-7fcbeecb4d78" (UID: "eeb5b089-8340-4b71-88f9-7fcbeecb4d78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.153778 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-kube-api-access-8zrns" (OuterVolumeSpecName: "kube-api-access-8zrns") pod "eeb5b089-8340-4b71-88f9-7fcbeecb4d78" (UID: "eeb5b089-8340-4b71-88f9-7fcbeecb4d78"). InnerVolumeSpecName "kube-api-access-8zrns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.183498 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eeb5b089-8340-4b71-88f9-7fcbeecb4d78" (UID: "eeb5b089-8340-4b71-88f9-7fcbeecb4d78"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.229765 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eeb5b089-8340-4b71-88f9-7fcbeecb4d78" (UID: "eeb5b089-8340-4b71-88f9-7fcbeecb4d78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.250326 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zrns\" (UniqueName: \"kubernetes.io/projected/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-kube-api-access-8zrns\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.250376 4983 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.250386 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.250394 4983 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.250402 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.250410 4983 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.266775 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-config-data" (OuterVolumeSpecName: "config-data") pod "eeb5b089-8340-4b71-88f9-7fcbeecb4d78" (UID: "eeb5b089-8340-4b71-88f9-7fcbeecb4d78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.352476 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeb5b089-8340-4b71-88f9-7fcbeecb4d78-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.530594 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-746b6775bd-26zqf"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.530627 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-746b6775bd-26zqf" event={"ID":"1ac04518-4a47-43b3-8e9f-84e8f3a80648","Type":"ContainerDied","Data":"f0aa031ea4dce9deb3b2d4c187a122b147cdaa498a949d77e53e52a304b684d7"}
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.530687 4983 scope.go:117] "RemoveContainer" containerID="9598658aedba74555877ee6f6068a0ccf9b04456d13ea1fee47bfe7f3e7437f7"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.532738 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6fck8" event={"ID":"06ccc433-0041-48b3-906e-8b7ff8ef57ab","Type":"ContainerStarted","Data":"6c0510021931b2ce8a13c807efb5bd223725de7efb4d20772ed2ffe98ce28223"}
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.535637 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eeb5b089-8340-4b71-88f9-7fcbeecb4d78","Type":"ContainerDied","Data":"72bcec2716074406da7000f91f96c55230ca72a18403fa60033941f2f7d958e5"}
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.535700 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.618032 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-6fck8" podStartSLOduration=2.148854679 podStartE2EDuration="11.61800486s" podCreationTimestamp="2025-11-25 20:45:13 +0000 UTC" firstStartedPulling="2025-11-25 20:45:14.151750527 +0000 UTC m=+1095.264283919" lastFinishedPulling="2025-11-25 20:45:23.620900708 +0000 UTC m=+1104.733434100" observedRunningTime="2025-11-25 20:45:24.58325288 +0000 UTC m=+1105.695786272" watchObservedRunningTime="2025-11-25 20:45:24.61800486 +0000 UTC m=+1105.730538252"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.693636 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.707625 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.724624 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-746b6775bd-26zqf"]
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.762666 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 25 20:45:24 crc kubenswrapper[4983]: E1125 20:45:24.763142 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.763156 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon"
Nov 25 20:45:24 crc kubenswrapper[4983]: E1125 20:45:24.763174 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="ceilometer-central-agent"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.763180 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="ceilometer-central-agent"
Nov 25 20:45:24 crc kubenswrapper[4983]: E1125 20:45:24.763205 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="ceilometer-notification-agent"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.763212 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="ceilometer-notification-agent"
Nov 25 20:45:24 crc kubenswrapper[4983]: E1125 20:45:24.763223 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon-log"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.763229 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon-log"
Nov 25 20:45:24 crc kubenswrapper[4983]: E1125 20:45:24.763253 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="proxy-httpd"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.763258 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="proxy-httpd"
Nov 25 20:45:24 crc kubenswrapper[4983]: E1125 20:45:24.763272 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="sg-core"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.763278 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="sg-core"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.763455 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.763475 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="proxy-httpd"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.763486 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="ceilometer-notification-agent"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.763497 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" containerName="horizon-log"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.763513 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="ceilometer-central-agent"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.763527 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" containerName="sg-core"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.765954 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.768925 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.769366 4983 scope.go:117] "RemoveContainer" containerID="258950fcf68cd7c9df940549ae0451b1b9f70f389d65405e3ab17da233c2b00c"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.771610 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.780277 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-746b6775bd-26zqf"]
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.804244 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.811521 4983 scope.go:117] "RemoveContainer" containerID="45abeb2ecf1dc6059abfb445da435aa27fa3cd007301a89ae0977ec8595a0ff9"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.827665 4983 scope.go:117] "RemoveContainer" containerID="9ea46cdb47f9b366567773f775e0480e15c50df2ae64150b46f0b9224dfbaaba"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.847486 4983 scope.go:117] "RemoveContainer" containerID="975306c0cd1a09c33e0405574bb510589cab3fcccf51b1e2c7027adcc2c33b21"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.868682 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881e0d16-6157-4835-8e2f-b8e1ef0f584f-run-httpd\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.868760 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.868804 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-config-data\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.868830 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.868857 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-scripts\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.868923 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881e0d16-6157-4835-8e2f-b8e1ef0f584f-log-httpd\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.868965 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bsmx\" (UniqueName: \"kubernetes.io/projected/881e0d16-6157-4835-8e2f-b8e1ef0f584f-kube-api-access-6bsmx\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.875567 4983 scope.go:117] "RemoveContainer" containerID="90132a70f624701dc9f0e53a5cdda0bda39a94fdbe408f96ad69deef92f3f8e0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.970959 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881e0d16-6157-4835-8e2f-b8e1ef0f584f-log-httpd\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.971045 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bsmx\" (UniqueName: \"kubernetes.io/projected/881e0d16-6157-4835-8e2f-b8e1ef0f584f-kube-api-access-6bsmx\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.971114 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881e0d16-6157-4835-8e2f-b8e1ef0f584f-run-httpd\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.971153 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.971183 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-config-data\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.971212 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.971251 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-scripts\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.973094 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881e0d16-6157-4835-8e2f-b8e1ef0f584f-run-httpd\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.973374 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881e0d16-6157-4835-8e2f-b8e1ef0f584f-log-httpd\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.977705 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-config-data\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0"
Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.978043 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0" Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.979045 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0" Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.989032 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-scripts\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0" Nov 25 20:45:24 crc kubenswrapper[4983]: I1125 20:45:24.998079 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bsmx\" (UniqueName: \"kubernetes.io/projected/881e0d16-6157-4835-8e2f-b8e1ef0f584f-kube-api-access-6bsmx\") pod \"ceilometer-0\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " pod="openstack/ceilometer-0" Nov 25 20:45:25 crc kubenswrapper[4983]: I1125 20:45:25.103319 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:45:25 crc kubenswrapper[4983]: I1125 20:45:25.454540 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:45:25 crc kubenswrapper[4983]: I1125 20:45:25.455131 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dc01c1b2-f944-418e-93e6-3022566892b5" containerName="glance-log" containerID="cri-o://8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61" gracePeriod=30 Nov 25 20:45:25 crc kubenswrapper[4983]: I1125 20:45:25.455676 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dc01c1b2-f944-418e-93e6-3022566892b5" containerName="glance-httpd" containerID="cri-o://4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a" gracePeriod=30 Nov 25 20:45:25 crc kubenswrapper[4983]: I1125 20:45:25.618375 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ac04518-4a47-43b3-8e9f-84e8f3a80648" path="/var/lib/kubelet/pods/1ac04518-4a47-43b3-8e9f-84e8f3a80648/volumes" Nov 25 20:45:25 crc kubenswrapper[4983]: I1125 20:45:25.619168 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb5b089-8340-4b71-88f9-7fcbeecb4d78" path="/var/lib/kubelet/pods/eeb5b089-8340-4b71-88f9-7fcbeecb4d78/volumes" Nov 25 20:45:25 crc kubenswrapper[4983]: I1125 20:45:25.653046 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:25 crc kubenswrapper[4983]: W1125 20:45:25.666076 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod881e0d16_6157_4835_8e2f_b8e1ef0f584f.slice/crio-c51cb0224c3134e3aa42977bfd9ad2a5096c1a063b767888b9175285bbb90e7c WatchSource:0}: Error finding container 
c51cb0224c3134e3aa42977bfd9ad2a5096c1a063b767888b9175285bbb90e7c: Status 404 returned error can't find the container with id c51cb0224c3134e3aa42977bfd9ad2a5096c1a063b767888b9175285bbb90e7c Nov 25 20:45:26 crc kubenswrapper[4983]: I1125 20:45:26.558183 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881e0d16-6157-4835-8e2f-b8e1ef0f584f","Type":"ContainerStarted","Data":"24ebd65fc3f10220aabed64e5c134aa7f3d69dcc3d69b2bb6a13fabb43247cd7"} Nov 25 20:45:26 crc kubenswrapper[4983]: I1125 20:45:26.559032 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881e0d16-6157-4835-8e2f-b8e1ef0f584f","Type":"ContainerStarted","Data":"c51cb0224c3134e3aa42977bfd9ad2a5096c1a063b767888b9175285bbb90e7c"} Nov 25 20:45:26 crc kubenswrapper[4983]: I1125 20:45:26.561194 4983 generic.go:334] "Generic (PLEG): container finished" podID="dc01c1b2-f944-418e-93e6-3022566892b5" containerID="8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61" exitCode=143 Nov 25 20:45:26 crc kubenswrapper[4983]: I1125 20:45:26.561226 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc01c1b2-f944-418e-93e6-3022566892b5","Type":"ContainerDied","Data":"8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61"} Nov 25 20:45:27 crc kubenswrapper[4983]: I1125 20:45:27.397875 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:27 crc kubenswrapper[4983]: I1125 20:45:27.574635 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881e0d16-6157-4835-8e2f-b8e1ef0f584f","Type":"ContainerStarted","Data":"104a564bc11dae587332978b6dbfbcb8f7e3baecd48c75a59352495696013c06"} Nov 25 20:45:27 crc kubenswrapper[4983]: I1125 20:45:27.615531 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:45:27 crc 
kubenswrapper[4983]: I1125 20:45:27.615869 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2ac3ba27-a414-4c7c-b0c5-5d728781ec91" containerName="glance-log" containerID="cri-o://5174a126032da749d36001d8aec55c44cdc275096d8acf2fc759afd0a2a5f9de" gracePeriod=30 Nov 25 20:45:27 crc kubenswrapper[4983]: I1125 20:45:27.615963 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2ac3ba27-a414-4c7c-b0c5-5d728781ec91" containerName="glance-httpd" containerID="cri-o://f50346e4e65d94d575c5457244b88459a1177672fd89d5a5b3b3538898b2c7b1" gracePeriod=30 Nov 25 20:45:28 crc kubenswrapper[4983]: I1125 20:45:28.586616 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ac3ba27-a414-4c7c-b0c5-5d728781ec91","Type":"ContainerDied","Data":"5174a126032da749d36001d8aec55c44cdc275096d8acf2fc759afd0a2a5f9de"} Nov 25 20:45:28 crc kubenswrapper[4983]: I1125 20:45:28.586520 4983 generic.go:334] "Generic (PLEG): container finished" podID="2ac3ba27-a414-4c7c-b0c5-5d728781ec91" containerID="5174a126032da749d36001d8aec55c44cdc275096d8acf2fc759afd0a2a5f9de" exitCode=143 Nov 25 20:45:28 crc kubenswrapper[4983]: I1125 20:45:28.589661 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881e0d16-6157-4835-8e2f-b8e1ef0f584f","Type":"ContainerStarted","Data":"82ba174ff04c066909c8ff6729ef2f491443ca31e83f226d4b6b96de2fc24ff1"} Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.167317 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.362231 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc01c1b2-f944-418e-93e6-3022566892b5-logs\") pod \"dc01c1b2-f944-418e-93e6-3022566892b5\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.362660 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-config-data\") pod \"dc01c1b2-f944-418e-93e6-3022566892b5\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.362685 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc01c1b2-f944-418e-93e6-3022566892b5-httpd-run\") pod \"dc01c1b2-f944-418e-93e6-3022566892b5\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.362821 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-combined-ca-bundle\") pod \"dc01c1b2-f944-418e-93e6-3022566892b5\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.362922 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-scripts\") pod \"dc01c1b2-f944-418e-93e6-3022566892b5\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.362973 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwn9m\" (UniqueName: 
\"kubernetes.io/projected/dc01c1b2-f944-418e-93e6-3022566892b5-kube-api-access-wwn9m\") pod \"dc01c1b2-f944-418e-93e6-3022566892b5\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.362998 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"dc01c1b2-f944-418e-93e6-3022566892b5\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.363081 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc01c1b2-f944-418e-93e6-3022566892b5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dc01c1b2-f944-418e-93e6-3022566892b5" (UID: "dc01c1b2-f944-418e-93e6-3022566892b5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.363160 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-public-tls-certs\") pod \"dc01c1b2-f944-418e-93e6-3022566892b5\" (UID: \"dc01c1b2-f944-418e-93e6-3022566892b5\") " Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.363168 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc01c1b2-f944-418e-93e6-3022566892b5-logs" (OuterVolumeSpecName: "logs") pod "dc01c1b2-f944-418e-93e6-3022566892b5" (UID: "dc01c1b2-f944-418e-93e6-3022566892b5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.363727 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc01c1b2-f944-418e-93e6-3022566892b5-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.363748 4983 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc01c1b2-f944-418e-93e6-3022566892b5-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.367659 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-scripts" (OuterVolumeSpecName: "scripts") pod "dc01c1b2-f944-418e-93e6-3022566892b5" (UID: "dc01c1b2-f944-418e-93e6-3022566892b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.367704 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "dc01c1b2-f944-418e-93e6-3022566892b5" (UID: "dc01c1b2-f944-418e-93e6-3022566892b5"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.377743 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc01c1b2-f944-418e-93e6-3022566892b5-kube-api-access-wwn9m" (OuterVolumeSpecName: "kube-api-access-wwn9m") pod "dc01c1b2-f944-418e-93e6-3022566892b5" (UID: "dc01c1b2-f944-418e-93e6-3022566892b5"). InnerVolumeSpecName "kube-api-access-wwn9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.402625 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc01c1b2-f944-418e-93e6-3022566892b5" (UID: "dc01c1b2-f944-418e-93e6-3022566892b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.427714 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dc01c1b2-f944-418e-93e6-3022566892b5" (UID: "dc01c1b2-f944-418e-93e6-3022566892b5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.427901 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-config-data" (OuterVolumeSpecName: "config-data") pod "dc01c1b2-f944-418e-93e6-3022566892b5" (UID: "dc01c1b2-f944-418e-93e6-3022566892b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.464958 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.464998 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.465008 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwn9m\" (UniqueName: \"kubernetes.io/projected/dc01c1b2-f944-418e-93e6-3022566892b5-kube-api-access-wwn9m\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.465047 4983 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.465057 4983 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.465071 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc01c1b2-f944-418e-93e6-3022566892b5-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.488456 4983 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.566818 4983 reconciler_common.go:293] "Volume detached for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.602053 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881e0d16-6157-4835-8e2f-b8e1ef0f584f","Type":"ContainerStarted","Data":"a1eabc9e98bcbbae43c6bdd72bed66033d04fb7afd47aa790995c8122fbc1913"} Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.602212 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="proxy-httpd" containerID="cri-o://a1eabc9e98bcbbae43c6bdd72bed66033d04fb7afd47aa790995c8122fbc1913" gracePeriod=30 Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.602245 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.602258 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="sg-core" containerID="cri-o://82ba174ff04c066909c8ff6729ef2f491443ca31e83f226d4b6b96de2fc24ff1" gracePeriod=30 Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.602280 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="ceilometer-notification-agent" containerID="cri-o://104a564bc11dae587332978b6dbfbcb8f7e3baecd48c75a59352495696013c06" gracePeriod=30 Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.602297 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="ceilometer-central-agent" containerID="cri-o://24ebd65fc3f10220aabed64e5c134aa7f3d69dcc3d69b2bb6a13fabb43247cd7" gracePeriod=30 Nov 25 20:45:29 
crc kubenswrapper[4983]: I1125 20:45:29.613094 4983 generic.go:334] "Generic (PLEG): container finished" podID="dc01c1b2-f944-418e-93e6-3022566892b5" containerID="4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a" exitCode=0 Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.613176 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.624412 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc01c1b2-f944-418e-93e6-3022566892b5","Type":"ContainerDied","Data":"4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a"} Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.624445 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc01c1b2-f944-418e-93e6-3022566892b5","Type":"ContainerDied","Data":"71ea98cddc95945425b074e17de06518381dca488dba5a921550dc531bf6a708"} Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.624464 4983 scope.go:117] "RemoveContainer" containerID="4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.627738 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3940947550000002 podStartE2EDuration="5.627718545s" podCreationTimestamp="2025-11-25 20:45:24 +0000 UTC" firstStartedPulling="2025-11-25 20:45:25.668621067 +0000 UTC m=+1106.781154459" lastFinishedPulling="2025-11-25 20:45:28.902244847 +0000 UTC m=+1110.014778249" observedRunningTime="2025-11-25 20:45:29.627716035 +0000 UTC m=+1110.740249437" watchObservedRunningTime="2025-11-25 20:45:29.627718545 +0000 UTC m=+1110.740251937" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.660079 4983 scope.go:117] "RemoveContainer" 
containerID="8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.666695 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.672344 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.704373 4983 scope.go:117] "RemoveContainer" containerID="4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a" Nov 25 20:45:29 crc kubenswrapper[4983]: E1125 20:45:29.705503 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a\": container with ID starting with 4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a not found: ID does not exist" containerID="4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.705786 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a"} err="failed to get container status \"4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a\": rpc error: code = NotFound desc = could not find container \"4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a\": container with ID starting with 4a08085d7fe3194a28a40f919f525080d2d18126f4bc138e70eaff0405640b2a not found: ID does not exist" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.705937 4983 scope.go:117] "RemoveContainer" containerID="8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61" Nov 25 20:45:29 crc kubenswrapper[4983]: E1125 20:45:29.706760 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61\": container with ID starting with 8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61 not found: ID does not exist" containerID="8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.706813 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61"} err="failed to get container status \"8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61\": rpc error: code = NotFound desc = could not find container \"8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61\": container with ID starting with 8d137886f9070d246bfc53c7fbaa44b4d1650fd06da9a4c37a8a07e80c963c61 not found: ID does not exist" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.710622 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:45:29 crc kubenswrapper[4983]: E1125 20:45:29.711126 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc01c1b2-f944-418e-93e6-3022566892b5" containerName="glance-httpd" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.711138 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc01c1b2-f944-418e-93e6-3022566892b5" containerName="glance-httpd" Nov 25 20:45:29 crc kubenswrapper[4983]: E1125 20:45:29.711150 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc01c1b2-f944-418e-93e6-3022566892b5" containerName="glance-log" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.711156 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc01c1b2-f944-418e-93e6-3022566892b5" containerName="glance-log" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.711314 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc01c1b2-f944-418e-93e6-3022566892b5" 
containerName="glance-httpd" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.711341 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc01c1b2-f944-418e-93e6-3022566892b5" containerName="glance-log" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.712335 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.714948 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.715252 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.728499 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.874711 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dspm9\" (UniqueName: \"kubernetes.io/projected/76343139-3638-4cc2-a865-ddb20d2d35a6-kube-api-access-dspm9\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.874777 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76343139-3638-4cc2-a865-ddb20d2d35a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.874795 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.874908 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76343139-3638-4cc2-a865-ddb20d2d35a6-logs\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.874929 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76343139-3638-4cc2-a865-ddb20d2d35a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.874972 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76343139-3638-4cc2-a865-ddb20d2d35a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.875006 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76343139-3638-4cc2-a865-ddb20d2d35a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.875296 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/76343139-3638-4cc2-a865-ddb20d2d35a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.976731 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76343139-3638-4cc2-a865-ddb20d2d35a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.977275 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.977310 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/76343139-3638-4cc2-a865-ddb20d2d35a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.977321 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76343139-3638-4cc2-a865-ddb20d2d35a6-logs\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.977403 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76343139-3638-4cc2-a865-ddb20d2d35a6-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.977453 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76343139-3638-4cc2-a865-ddb20d2d35a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.977570 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76343139-3638-4cc2-a865-ddb20d2d35a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.977642 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.977777 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76343139-3638-4cc2-a865-ddb20d2d35a6-logs\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.978168 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76343139-3638-4cc2-a865-ddb20d2d35a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " 
pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.978371 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dspm9\" (UniqueName: \"kubernetes.io/projected/76343139-3638-4cc2-a865-ddb20d2d35a6-kube-api-access-dspm9\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.983527 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76343139-3638-4cc2-a865-ddb20d2d35a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.988407 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76343139-3638-4cc2-a865-ddb20d2d35a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.993166 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76343139-3638-4cc2-a865-ddb20d2d35a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc kubenswrapper[4983]: I1125 20:45:29.994741 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76343139-3638-4cc2-a865-ddb20d2d35a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:29 crc 
kubenswrapper[4983]: I1125 20:45:29.998305 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dspm9\" (UniqueName: \"kubernetes.io/projected/76343139-3638-4cc2-a865-ddb20d2d35a6-kube-api-access-dspm9\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:30 crc kubenswrapper[4983]: I1125 20:45:30.016861 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"76343139-3638-4cc2-a865-ddb20d2d35a6\") " pod="openstack/glance-default-external-api-0" Nov 25 20:45:30 crc kubenswrapper[4983]: I1125 20:45:30.098357 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 20:45:30 crc kubenswrapper[4983]: I1125 20:45:30.628429 4983 generic.go:334] "Generic (PLEG): container finished" podID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerID="a1eabc9e98bcbbae43c6bdd72bed66033d04fb7afd47aa790995c8122fbc1913" exitCode=0 Nov 25 20:45:30 crc kubenswrapper[4983]: I1125 20:45:30.628935 4983 generic.go:334] "Generic (PLEG): container finished" podID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerID="82ba174ff04c066909c8ff6729ef2f491443ca31e83f226d4b6b96de2fc24ff1" exitCode=2 Nov 25 20:45:30 crc kubenswrapper[4983]: I1125 20:45:30.628580 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881e0d16-6157-4835-8e2f-b8e1ef0f584f","Type":"ContainerDied","Data":"a1eabc9e98bcbbae43c6bdd72bed66033d04fb7afd47aa790995c8122fbc1913"} Nov 25 20:45:30 crc kubenswrapper[4983]: I1125 20:45:30.629004 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"881e0d16-6157-4835-8e2f-b8e1ef0f584f","Type":"ContainerDied","Data":"82ba174ff04c066909c8ff6729ef2f491443ca31e83f226d4b6b96de2fc24ff1"} Nov 25 20:45:30 crc kubenswrapper[4983]: I1125 20:45:30.629020 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881e0d16-6157-4835-8e2f-b8e1ef0f584f","Type":"ContainerDied","Data":"104a564bc11dae587332978b6dbfbcb8f7e3baecd48c75a59352495696013c06"} Nov 25 20:45:30 crc kubenswrapper[4983]: I1125 20:45:30.628952 4983 generic.go:334] "Generic (PLEG): container finished" podID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerID="104a564bc11dae587332978b6dbfbcb8f7e3baecd48c75a59352495696013c06" exitCode=0 Nov 25 20:45:30 crc kubenswrapper[4983]: W1125 20:45:30.682483 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76343139_3638_4cc2_a865_ddb20d2d35a6.slice/crio-d9f9c8b01b5a23d5983c8c48db3f3591fac7711ac74bf5955e13ae8fbec2f709 WatchSource:0}: Error finding container d9f9c8b01b5a23d5983c8c48db3f3591fac7711ac74bf5955e13ae8fbec2f709: Status 404 returned error can't find the container with id d9f9c8b01b5a23d5983c8c48db3f3591fac7711ac74bf5955e13ae8fbec2f709 Nov 25 20:45:30 crc kubenswrapper[4983]: I1125 20:45:30.697496 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.625310 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc01c1b2-f944-418e-93e6-3022566892b5" path="/var/lib/kubelet/pods/dc01c1b2-f944-418e-93e6-3022566892b5/volumes" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.646689 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"76343139-3638-4cc2-a865-ddb20d2d35a6","Type":"ContainerStarted","Data":"9731e1c8a9425b004d08bf2b81223ab1737c62178d58defe2983e1c4af166574"} Nov 25 20:45:31 crc 
kubenswrapper[4983]: I1125 20:45:31.646735 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"76343139-3638-4cc2-a865-ddb20d2d35a6","Type":"ContainerStarted","Data":"d9f9c8b01b5a23d5983c8c48db3f3591fac7711ac74bf5955e13ae8fbec2f709"} Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.651796 4983 generic.go:334] "Generic (PLEG): container finished" podID="2ac3ba27-a414-4c7c-b0c5-5d728781ec91" containerID="f50346e4e65d94d575c5457244b88459a1177672fd89d5a5b3b3538898b2c7b1" exitCode=0 Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.651837 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ac3ba27-a414-4c7c-b0c5-5d728781ec91","Type":"ContainerDied","Data":"f50346e4e65d94d575c5457244b88459a1177672fd89d5a5b3b3538898b2c7b1"} Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.651867 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2ac3ba27-a414-4c7c-b0c5-5d728781ec91","Type":"ContainerDied","Data":"636e4701646f72fcae36d4c5d67dc6f10edfe31e6573ceb50b04e24b7ca6dc83"} Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.651880 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="636e4701646f72fcae36d4c5d67dc6f10edfe31e6573ceb50b04e24b7ca6dc83" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.664999 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.816518 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-combined-ca-bundle\") pod \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.817032 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.817160 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp4k8\" (UniqueName: \"kubernetes.io/projected/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-kube-api-access-mp4k8\") pod \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.817220 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-httpd-run\") pod \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.817255 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-internal-tls-certs\") pod \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.817318 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-scripts\") pod \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.817376 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-config-data\") pod \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.817411 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-logs\") pod \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\" (UID: \"2ac3ba27-a414-4c7c-b0c5-5d728781ec91\") " Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.818301 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-logs" (OuterVolumeSpecName: "logs") pod "2ac3ba27-a414-4c7c-b0c5-5d728781ec91" (UID: "2ac3ba27-a414-4c7c-b0c5-5d728781ec91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.819406 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2ac3ba27-a414-4c7c-b0c5-5d728781ec91" (UID: "2ac3ba27-a414-4c7c-b0c5-5d728781ec91"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.835043 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-scripts" (OuterVolumeSpecName: "scripts") pod "2ac3ba27-a414-4c7c-b0c5-5d728781ec91" (UID: "2ac3ba27-a414-4c7c-b0c5-5d728781ec91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.835923 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "2ac3ba27-a414-4c7c-b0c5-5d728781ec91" (UID: "2ac3ba27-a414-4c7c-b0c5-5d728781ec91"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.845793 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-kube-api-access-mp4k8" (OuterVolumeSpecName: "kube-api-access-mp4k8") pod "2ac3ba27-a414-4c7c-b0c5-5d728781ec91" (UID: "2ac3ba27-a414-4c7c-b0c5-5d728781ec91"). InnerVolumeSpecName "kube-api-access-mp4k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.883965 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ac3ba27-a414-4c7c-b0c5-5d728781ec91" (UID: "2ac3ba27-a414-4c7c-b0c5-5d728781ec91"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.915918 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2ac3ba27-a414-4c7c-b0c5-5d728781ec91" (UID: "2ac3ba27-a414-4c7c-b0c5-5d728781ec91"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.919217 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.919248 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.919259 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.919285 4983 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.919296 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp4k8\" (UniqueName: \"kubernetes.io/projected/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-kube-api-access-mp4k8\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.919305 4983 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.919313 4983 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.932871 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-config-data" (OuterVolumeSpecName: "config-data") pod "2ac3ba27-a414-4c7c-b0c5-5d728781ec91" (UID: "2ac3ba27-a414-4c7c-b0c5-5d728781ec91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:31 crc kubenswrapper[4983]: I1125 20:45:31.948415 4983 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.020973 4983 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.021171 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ac3ba27-a414-4c7c-b0c5-5d728781ec91-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.663827 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"76343139-3638-4cc2-a865-ddb20d2d35a6","Type":"ContainerStarted","Data":"a162424df82b1fc10d8034cc9259e6dde337ed50758bc40572733afe4f68bacf"} Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.663909 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.692093 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.6920693509999998 podStartE2EDuration="3.692069351s" podCreationTimestamp="2025-11-25 20:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:32.688709983 +0000 UTC m=+1113.801243375" watchObservedRunningTime="2025-11-25 20:45:32.692069351 +0000 UTC m=+1113.804602743" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.717883 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.729773 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.749163 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:45:32 crc kubenswrapper[4983]: E1125 20:45:32.749779 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac3ba27-a414-4c7c-b0c5-5d728781ec91" containerName="glance-log" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.749868 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac3ba27-a414-4c7c-b0c5-5d728781ec91" containerName="glance-log" Nov 25 20:45:32 crc kubenswrapper[4983]: E1125 20:45:32.749955 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac3ba27-a414-4c7c-b0c5-5d728781ec91" containerName="glance-httpd" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.750009 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac3ba27-a414-4c7c-b0c5-5d728781ec91" containerName="glance-httpd" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.750274 4983 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac3ba27-a414-4c7c-b0c5-5d728781ec91" containerName="glance-httpd" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.750350 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac3ba27-a414-4c7c-b0c5-5d728781ec91" containerName="glance-log" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.751346 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.755771 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.762300 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.764206 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.837889 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2eefb7f-341f-4f91-8b67-2fc45217b414-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.838103 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2eefb7f-341f-4f91-8b67-2fc45217b414-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.838196 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2eefb7f-341f-4f91-8b67-2fc45217b414-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.838506 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqtml\" (UniqueName: \"kubernetes.io/projected/e2eefb7f-341f-4f91-8b67-2fc45217b414-kube-api-access-qqtml\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.838716 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2eefb7f-341f-4f91-8b67-2fc45217b414-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.838821 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.838888 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2eefb7f-341f-4f91-8b67-2fc45217b414-logs\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.838948 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2eefb7f-341f-4f91-8b67-2fc45217b414-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.940439 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2eefb7f-341f-4f91-8b67-2fc45217b414-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.940499 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.940526 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2eefb7f-341f-4f91-8b67-2fc45217b414-logs\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.940565 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2eefb7f-341f-4f91-8b67-2fc45217b414-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.940617 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e2eefb7f-341f-4f91-8b67-2fc45217b414-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.940650 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2eefb7f-341f-4f91-8b67-2fc45217b414-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.940671 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2eefb7f-341f-4f91-8b67-2fc45217b414-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.940722 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqtml\" (UniqueName: \"kubernetes.io/projected/e2eefb7f-341f-4f91-8b67-2fc45217b414-kube-api-access-qqtml\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.940800 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.941092 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e2eefb7f-341f-4f91-8b67-2fc45217b414-logs\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.941353 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2eefb7f-341f-4f91-8b67-2fc45217b414-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.951104 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2eefb7f-341f-4f91-8b67-2fc45217b414-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.951201 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2eefb7f-341f-4f91-8b67-2fc45217b414-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.952278 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2eefb7f-341f-4f91-8b67-2fc45217b414-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.957483 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2eefb7f-341f-4f91-8b67-2fc45217b414-config-data\") pod \"glance-default-internal-api-0\" 
(UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.962977 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqtml\" (UniqueName: \"kubernetes.io/projected/e2eefb7f-341f-4f91-8b67-2fc45217b414-kube-api-access-qqtml\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:32 crc kubenswrapper[4983]: I1125 20:45:32.975003 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"e2eefb7f-341f-4f91-8b67-2fc45217b414\") " pod="openstack/glance-default-internal-api-0" Nov 25 20:45:33 crc kubenswrapper[4983]: I1125 20:45:33.089901 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:33 crc kubenswrapper[4983]: I1125 20:45:33.618344 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac3ba27-a414-4c7c-b0c5-5d728781ec91" path="/var/lib/kubelet/pods/2ac3ba27-a414-4c7c-b0c5-5d728781ec91/volumes" Nov 25 20:45:33 crc kubenswrapper[4983]: I1125 20:45:33.775601 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 20:45:34 crc kubenswrapper[4983]: I1125 20:45:34.687284 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e2eefb7f-341f-4f91-8b67-2fc45217b414","Type":"ContainerStarted","Data":"0cbc1cb6691583374760007ff2bce910b8184afebed6669e515e4ab7b3c45cef"} Nov 25 20:45:34 crc kubenswrapper[4983]: I1125 20:45:34.687342 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"e2eefb7f-341f-4f91-8b67-2fc45217b414","Type":"ContainerStarted","Data":"5ee6aaf68630a1af8fc05f3bb6058d3c10a2093453a101e50d4c9c5067ee2443"} Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.713948 4983 generic.go:334] "Generic (PLEG): container finished" podID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerID="24ebd65fc3f10220aabed64e5c134aa7f3d69dcc3d69b2bb6a13fabb43247cd7" exitCode=0 Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.714690 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881e0d16-6157-4835-8e2f-b8e1ef0f584f","Type":"ContainerDied","Data":"24ebd65fc3f10220aabed64e5c134aa7f3d69dcc3d69b2bb6a13fabb43247cd7"} Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.718351 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e2eefb7f-341f-4f91-8b67-2fc45217b414","Type":"ContainerStarted","Data":"71983e236367143052c34832b921ab983d7deb4c7d09983e2b9d5143269c1575"} Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.741045 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.741023372 podStartE2EDuration="3.741023372s" podCreationTimestamp="2025-11-25 20:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:35.73493468 +0000 UTC m=+1116.847468072" watchObservedRunningTime="2025-11-25 20:45:35.741023372 +0000 UTC m=+1116.853556764" Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.820292 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.939382 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881e0d16-6157-4835-8e2f-b8e1ef0f584f-log-httpd\") pod \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.939434 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-combined-ca-bundle\") pod \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.939699 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bsmx\" (UniqueName: \"kubernetes.io/projected/881e0d16-6157-4835-8e2f-b8e1ef0f584f-kube-api-access-6bsmx\") pod \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.939800 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-scripts\") pod \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.939903 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-sg-core-conf-yaml\") pod \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.940027 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/881e0d16-6157-4835-8e2f-b8e1ef0f584f-run-httpd\") pod \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.940161 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-config-data\") pod \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\" (UID: \"881e0d16-6157-4835-8e2f-b8e1ef0f584f\") " Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.940164 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/881e0d16-6157-4835-8e2f-b8e1ef0f584f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "881e0d16-6157-4835-8e2f-b8e1ef0f584f" (UID: "881e0d16-6157-4835-8e2f-b8e1ef0f584f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.940671 4983 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881e0d16-6157-4835-8e2f-b8e1ef0f584f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.941374 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/881e0d16-6157-4835-8e2f-b8e1ef0f584f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "881e0d16-6157-4835-8e2f-b8e1ef0f584f" (UID: "881e0d16-6157-4835-8e2f-b8e1ef0f584f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.950891 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-scripts" (OuterVolumeSpecName: "scripts") pod "881e0d16-6157-4835-8e2f-b8e1ef0f584f" (UID: "881e0d16-6157-4835-8e2f-b8e1ef0f584f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.972421 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881e0d16-6157-4835-8e2f-b8e1ef0f584f-kube-api-access-6bsmx" (OuterVolumeSpecName: "kube-api-access-6bsmx") pod "881e0d16-6157-4835-8e2f-b8e1ef0f584f" (UID: "881e0d16-6157-4835-8e2f-b8e1ef0f584f"). InnerVolumeSpecName "kube-api-access-6bsmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:35 crc kubenswrapper[4983]: I1125 20:45:35.983512 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "881e0d16-6157-4835-8e2f-b8e1ef0f584f" (UID: "881e0d16-6157-4835-8e2f-b8e1ef0f584f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.049219 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bsmx\" (UniqueName: \"kubernetes.io/projected/881e0d16-6157-4835-8e2f-b8e1ef0f584f-kube-api-access-6bsmx\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.049255 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.049264 4983 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.049296 4983 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/881e0d16-6157-4835-8e2f-b8e1ef0f584f-run-httpd\") on node 
\"crc\" DevicePath \"\"" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.051128 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "881e0d16-6157-4835-8e2f-b8e1ef0f584f" (UID: "881e0d16-6157-4835-8e2f-b8e1ef0f584f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.063713 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-config-data" (OuterVolumeSpecName: "config-data") pod "881e0d16-6157-4835-8e2f-b8e1ef0f584f" (UID: "881e0d16-6157-4835-8e2f-b8e1ef0f584f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.151880 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.151935 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/881e0d16-6157-4835-8e2f-b8e1ef0f584f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.745642 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.746299 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"881e0d16-6157-4835-8e2f-b8e1ef0f584f","Type":"ContainerDied","Data":"c51cb0224c3134e3aa42977bfd9ad2a5096c1a063b767888b9175285bbb90e7c"} Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.747369 4983 scope.go:117] "RemoveContainer" containerID="a1eabc9e98bcbbae43c6bdd72bed66033d04fb7afd47aa790995c8122fbc1913" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.789199 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.796864 4983 scope.go:117] "RemoveContainer" containerID="82ba174ff04c066909c8ff6729ef2f491443ca31e83f226d4b6b96de2fc24ff1" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.800541 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.823596 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:36 crc kubenswrapper[4983]: E1125 20:45:36.824310 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="ceilometer-central-agent" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.824340 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="ceilometer-central-agent" Nov 25 20:45:36 crc kubenswrapper[4983]: E1125 20:45:36.824381 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="ceilometer-notification-agent" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.824390 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="ceilometer-notification-agent" Nov 25 
20:45:36 crc kubenswrapper[4983]: E1125 20:45:36.824404 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="sg-core" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.824413 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="sg-core" Nov 25 20:45:36 crc kubenswrapper[4983]: E1125 20:45:36.824432 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="proxy-httpd" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.824439 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="proxy-httpd" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.824856 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="proxy-httpd" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.824886 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="ceilometer-central-agent" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.824909 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="sg-core" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.824931 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" containerName="ceilometer-notification-agent" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.827589 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.831144 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.838292 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.841007 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.878166 4983 scope.go:117] "RemoveContainer" containerID="104a564bc11dae587332978b6dbfbcb8f7e3baecd48c75a59352495696013c06" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.910022 4983 scope.go:117] "RemoveContainer" containerID="24ebd65fc3f10220aabed64e5c134aa7f3d69dcc3d69b2bb6a13fabb43247cd7" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.973371 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.973449 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdlbp\" (UniqueName: \"kubernetes.io/projected/0a71be43-125f-433d-8c68-9632f83b55f0-kube-api-access-hdlbp\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.973955 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a71be43-125f-433d-8c68-9632f83b55f0-run-httpd\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " 
pod="openstack/ceilometer-0" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.974293 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-scripts\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.974500 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.974672 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a71be43-125f-433d-8c68-9632f83b55f0-log-httpd\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:36 crc kubenswrapper[4983]: I1125 20:45:36.974781 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-config-data\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.078608 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a71be43-125f-433d-8c68-9632f83b55f0-run-httpd\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.078744 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-scripts\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.078802 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.078835 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a71be43-125f-433d-8c68-9632f83b55f0-log-httpd\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.078857 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-config-data\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.078923 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.078948 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdlbp\" (UniqueName: \"kubernetes.io/projected/0a71be43-125f-433d-8c68-9632f83b55f0-kube-api-access-hdlbp\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 
crc kubenswrapper[4983]: I1125 20:45:37.079239 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a71be43-125f-433d-8c68-9632f83b55f0-run-httpd\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.079332 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a71be43-125f-433d-8c68-9632f83b55f0-log-httpd\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.086039 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-scripts\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.086536 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.087300 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-config-data\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.088073 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.095423 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdlbp\" (UniqueName: \"kubernetes.io/projected/0a71be43-125f-433d-8c68-9632f83b55f0-kube-api-access-hdlbp\") pod \"ceilometer-0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.168530 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.618191 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="881e0d16-6157-4835-8e2f-b8e1ef0f584f" path="/var/lib/kubelet/pods/881e0d16-6157-4835-8e2f-b8e1ef0f584f/volumes" Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.702069 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.774129 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a71be43-125f-433d-8c68-9632f83b55f0","Type":"ContainerStarted","Data":"55e84bba8fbbe83190b56a76a78b6cb4223a5b80d7f0e99e0581c3dc188e5cdf"} Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.775804 4983 generic.go:334] "Generic (PLEG): container finished" podID="06ccc433-0041-48b3-906e-8b7ff8ef57ab" containerID="6c0510021931b2ce8a13c807efb5bd223725de7efb4d20772ed2ffe98ce28223" exitCode=0 Nov 25 20:45:37 crc kubenswrapper[4983]: I1125 20:45:37.775863 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6fck8" event={"ID":"06ccc433-0041-48b3-906e-8b7ff8ef57ab","Type":"ContainerDied","Data":"6c0510021931b2ce8a13c807efb5bd223725de7efb4d20772ed2ffe98ce28223"} Nov 25 20:45:38 crc kubenswrapper[4983]: I1125 20:45:38.787849 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"0a71be43-125f-433d-8c68-9632f83b55f0","Type":"ContainerStarted","Data":"acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3"} Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.186297 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6fck8" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.360374 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7q56\" (UniqueName: \"kubernetes.io/projected/06ccc433-0041-48b3-906e-8b7ff8ef57ab-kube-api-access-v7q56\") pod \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.360982 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-scripts\") pod \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.361111 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-config-data\") pod \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.361183 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-combined-ca-bundle\") pod \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\" (UID: \"06ccc433-0041-48b3-906e-8b7ff8ef57ab\") " Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.364549 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-scripts" 
(OuterVolumeSpecName: "scripts") pod "06ccc433-0041-48b3-906e-8b7ff8ef57ab" (UID: "06ccc433-0041-48b3-906e-8b7ff8ef57ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.366719 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ccc433-0041-48b3-906e-8b7ff8ef57ab-kube-api-access-v7q56" (OuterVolumeSpecName: "kube-api-access-v7q56") pod "06ccc433-0041-48b3-906e-8b7ff8ef57ab" (UID: "06ccc433-0041-48b3-906e-8b7ff8ef57ab"). InnerVolumeSpecName "kube-api-access-v7q56". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.389723 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06ccc433-0041-48b3-906e-8b7ff8ef57ab" (UID: "06ccc433-0041-48b3-906e-8b7ff8ef57ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.395672 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-config-data" (OuterVolumeSpecName: "config-data") pod "06ccc433-0041-48b3-906e-8b7ff8ef57ab" (UID: "06ccc433-0041-48b3-906e-8b7ff8ef57ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.462990 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.463027 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.463040 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7q56\" (UniqueName: \"kubernetes.io/projected/06ccc433-0041-48b3-906e-8b7ff8ef57ab-kube-api-access-v7q56\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.463049 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06ccc433-0041-48b3-906e-8b7ff8ef57ab-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.838746 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a71be43-125f-433d-8c68-9632f83b55f0","Type":"ContainerStarted","Data":"da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141"} Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.840461 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6fck8" event={"ID":"06ccc433-0041-48b3-906e-8b7ff8ef57ab","Type":"ContainerDied","Data":"b0bf33770ea5ca4fcc4c79341d28b4f79d7cc3531f6f5124e26cf611f5a3a693"} Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.840493 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0bf33770ea5ca4fcc4c79341d28b4f79d7cc3531f6f5124e26cf611f5a3a693" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 
20:45:39.840576 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6fck8" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.889389 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 20:45:39 crc kubenswrapper[4983]: E1125 20:45:39.906540 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ccc433-0041-48b3-906e-8b7ff8ef57ab" containerName="nova-cell0-conductor-db-sync" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.906593 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ccc433-0041-48b3-906e-8b7ff8ef57ab" containerName="nova-cell0-conductor-db-sync" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.907749 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ccc433-0041-48b3-906e-8b7ff8ef57ab" containerName="nova-cell0-conductor-db-sync" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.915189 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.916844 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.920381 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2dxqn" Nov 25 20:45:39 crc kubenswrapper[4983]: I1125 20:45:39.920520 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.074965 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/279216cc-b7af-430b-95ff-07b9330eea8c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"279216cc-b7af-430b-95ff-07b9330eea8c\") " pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.075076 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wx86\" (UniqueName: \"kubernetes.io/projected/279216cc-b7af-430b-95ff-07b9330eea8c-kube-api-access-5wx86\") pod \"nova-cell0-conductor-0\" (UID: \"279216cc-b7af-430b-95ff-07b9330eea8c\") " pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.075148 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/279216cc-b7af-430b-95ff-07b9330eea8c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"279216cc-b7af-430b-95ff-07b9330eea8c\") " pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.099396 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 
20:45:40.099487 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.127854 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.136489 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.176586 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/279216cc-b7af-430b-95ff-07b9330eea8c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"279216cc-b7af-430b-95ff-07b9330eea8c\") " pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.176676 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wx86\" (UniqueName: \"kubernetes.io/projected/279216cc-b7af-430b-95ff-07b9330eea8c-kube-api-access-5wx86\") pod \"nova-cell0-conductor-0\" (UID: \"279216cc-b7af-430b-95ff-07b9330eea8c\") " pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.176742 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/279216cc-b7af-430b-95ff-07b9330eea8c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"279216cc-b7af-430b-95ff-07b9330eea8c\") " pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.188912 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/279216cc-b7af-430b-95ff-07b9330eea8c-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"279216cc-b7af-430b-95ff-07b9330eea8c\") " 
pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.188939 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/279216cc-b7af-430b-95ff-07b9330eea8c-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"279216cc-b7af-430b-95ff-07b9330eea8c\") " pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.199233 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wx86\" (UniqueName: \"kubernetes.io/projected/279216cc-b7af-430b-95ff-07b9330eea8c-kube-api-access-5wx86\") pod \"nova-cell0-conductor-0\" (UID: \"279216cc-b7af-430b-95ff-07b9330eea8c\") " pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.283260 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.755747 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.855946 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a71be43-125f-433d-8c68-9632f83b55f0","Type":"ContainerStarted","Data":"4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7"} Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.858908 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"279216cc-b7af-430b-95ff-07b9330eea8c","Type":"ContainerStarted","Data":"d8df515582f62ca3c83c9d56f370f018777db90c991866129b2f73acada776d1"} Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.859395 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 20:45:40 crc kubenswrapper[4983]: I1125 20:45:40.859456 4983 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 20:45:41 crc kubenswrapper[4983]: I1125 20:45:41.886799 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"279216cc-b7af-430b-95ff-07b9330eea8c","Type":"ContainerStarted","Data":"4861aa493114fd8e6c756e99e6131fdcddbcc8d28fc2132569e65688bd2f68f3"} Nov 25 20:45:41 crc kubenswrapper[4983]: I1125 20:45:41.887149 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:41 crc kubenswrapper[4983]: I1125 20:45:41.891784 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a71be43-125f-433d-8c68-9632f83b55f0","Type":"ContainerStarted","Data":"a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae"} Nov 25 20:45:41 crc kubenswrapper[4983]: I1125 20:45:41.911835 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.9118182 podStartE2EDuration="2.9118182s" podCreationTimestamp="2025-11-25 20:45:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:41.908141442 +0000 UTC m=+1123.020674834" watchObservedRunningTime="2025-11-25 20:45:41.9118182 +0000 UTC m=+1123.024351592" Nov 25 20:45:41 crc kubenswrapper[4983]: I1125 20:45:41.936373 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.690414455 podStartE2EDuration="5.936341709s" podCreationTimestamp="2025-11-25 20:45:36 +0000 UTC" firstStartedPulling="2025-11-25 20:45:37.700953816 +0000 UTC m=+1118.813487218" lastFinishedPulling="2025-11-25 20:45:40.94688108 +0000 UTC m=+1122.059414472" observedRunningTime="2025-11-25 20:45:41.929668052 +0000 UTC m=+1123.042201444" watchObservedRunningTime="2025-11-25 20:45:41.936341709 
+0000 UTC m=+1123.048875101" Nov 25 20:45:42 crc kubenswrapper[4983]: I1125 20:45:42.902981 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 20:45:42 crc kubenswrapper[4983]: I1125 20:45:42.956837 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 20:45:42 crc kubenswrapper[4983]: I1125 20:45:42.957032 4983 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 20:45:43 crc kubenswrapper[4983]: I1125 20:45:43.091169 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:43 crc kubenswrapper[4983]: I1125 20:45:43.091239 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:43 crc kubenswrapper[4983]: I1125 20:45:43.127423 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:43 crc kubenswrapper[4983]: I1125 20:45:43.128261 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 20:45:43 crc kubenswrapper[4983]: I1125 20:45:43.131597 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:43 crc kubenswrapper[4983]: I1125 20:45:43.910878 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:43 crc kubenswrapper[4983]: I1125 20:45:43.910924 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:45 crc kubenswrapper[4983]: I1125 20:45:45.308679 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 25 20:45:45 crc 
kubenswrapper[4983]: I1125 20:45:45.686972 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:45 crc kubenswrapper[4983]: I1125 20:45:45.875804 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 20:45:45 crc kubenswrapper[4983]: I1125 20:45:45.927335 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-bhd54"] Nov 25 20:45:45 crc kubenswrapper[4983]: I1125 20:45:45.929699 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:45 crc kubenswrapper[4983]: I1125 20:45:45.933567 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 25 20:45:45 crc kubenswrapper[4983]: I1125 20:45:45.933802 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.006784 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bhd54"] Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.008600 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-config-data\") pod \"nova-cell0-cell-mapping-bhd54\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.009411 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-scripts\") pod \"nova-cell0-cell-mapping-bhd54\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 
25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.009464 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft68d\" (UniqueName: \"kubernetes.io/projected/4ae5733a-6f6c-40cb-bc80-0110e4549e58-kube-api-access-ft68d\") pod \"nova-cell0-cell-mapping-bhd54\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.009498 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bhd54\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.118333 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bhd54\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.118410 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-config-data\") pod \"nova-cell0-cell-mapping-bhd54\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.118533 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-scripts\") pod \"nova-cell0-cell-mapping-bhd54\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 
20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.118568 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft68d\" (UniqueName: \"kubernetes.io/projected/4ae5733a-6f6c-40cb-bc80-0110e4549e58-kube-api-access-ft68d\") pod \"nova-cell0-cell-mapping-bhd54\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.126270 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.126461 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-bhd54\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.127512 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.128790 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-config-data\") pod \"nova-cell0-cell-mapping-bhd54\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.147170 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft68d\" (UniqueName: \"kubernetes.io/projected/4ae5733a-6f6c-40cb-bc80-0110e4549e58-kube-api-access-ft68d\") pod \"nova-cell0-cell-mapping-bhd54\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.156507 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-scripts\") pod \"nova-cell0-cell-mapping-bhd54\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.166067 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.179801 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.217886 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.219414 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.220159 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3173a912-98b8-4681-88a3-3903ad98a52d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3173a912-98b8-4681-88a3-3903ad98a52d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.220269 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3173a912-98b8-4681-88a3-3903ad98a52d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3173a912-98b8-4681-88a3-3903ad98a52d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.220311 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7f69\" (UniqueName: \"kubernetes.io/projected/3173a912-98b8-4681-88a3-3903ad98a52d-kube-api-access-p7f69\") pod \"nova-cell1-novncproxy-0\" (UID: \"3173a912-98b8-4681-88a3-3903ad98a52d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.221230 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.274248 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.274734 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.326463 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7f69\" (UniqueName: \"kubernetes.io/projected/3173a912-98b8-4681-88a3-3903ad98a52d-kube-api-access-p7f69\") pod \"nova-cell1-novncproxy-0\" (UID: \"3173a912-98b8-4681-88a3-3903ad98a52d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.326588 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3173a912-98b8-4681-88a3-3903ad98a52d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3173a912-98b8-4681-88a3-3903ad98a52d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.326625 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32608ae3-6acd-4024-ad12-7ed6476db3f1-logs\") pod \"nova-api-0\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.326644 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5kzp\" (UniqueName: \"kubernetes.io/projected/32608ae3-6acd-4024-ad12-7ed6476db3f1-kube-api-access-z5kzp\") pod \"nova-api-0\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.326669 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32608ae3-6acd-4024-ad12-7ed6476db3f1-combined-ca-bundle\") 
pod \"nova-api-0\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.326691 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32608ae3-6acd-4024-ad12-7ed6476db3f1-config-data\") pod \"nova-api-0\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.326733 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3173a912-98b8-4681-88a3-3903ad98a52d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3173a912-98b8-4681-88a3-3903ad98a52d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.329668 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.332997 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3173a912-98b8-4681-88a3-3903ad98a52d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3173a912-98b8-4681-88a3-3903ad98a52d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.333507 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3173a912-98b8-4681-88a3-3903ad98a52d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3173a912-98b8-4681-88a3-3903ad98a52d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.333922 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.337171 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.383396 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7f69\" (UniqueName: \"kubernetes.io/projected/3173a912-98b8-4681-88a3-3903ad98a52d-kube-api-access-p7f69\") pod \"nova-cell1-novncproxy-0\" (UID: \"3173a912-98b8-4681-88a3-3903ad98a52d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.395339 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.445445 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32608ae3-6acd-4024-ad12-7ed6476db3f1-logs\") pod \"nova-api-0\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.449771 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5kzp\" (UniqueName: \"kubernetes.io/projected/32608ae3-6acd-4024-ad12-7ed6476db3f1-kube-api-access-z5kzp\") pod \"nova-api-0\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.449880 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32608ae3-6acd-4024-ad12-7ed6476db3f1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.449935 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/32608ae3-6acd-4024-ad12-7ed6476db3f1-config-data\") pod \"nova-api-0\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.452212 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32608ae3-6acd-4024-ad12-7ed6476db3f1-logs\") pod \"nova-api-0\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.463204 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32608ae3-6acd-4024-ad12-7ed6476db3f1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.489582 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5kzp\" (UniqueName: \"kubernetes.io/projected/32608ae3-6acd-4024-ad12-7ed6476db3f1-kube-api-access-z5kzp\") pod \"nova-api-0\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.505662 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32608ae3-6acd-4024-ad12-7ed6476db3f1-config-data\") pod \"nova-api-0\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.514203 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-rqg7j"] Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.519754 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.620412 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.621584 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-config\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.621660 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-dns-svc\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.621791 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqrz\" (UniqueName: \"kubernetes.io/projected/27e29df2-0e09-486f-afe1-fb74a909567c-kube-api-access-clqrz\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.621854 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88jgt\" (UniqueName: \"kubernetes.io/projected/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-kube-api-access-88jgt\") pod \"nova-metadata-0\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.621932 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-logs\") pod \"nova-metadata-0\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.621966 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-config-data\") pod \"nova-metadata-0\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.622027 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.622218 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.622284 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.622355 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.626164 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.637723 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.645947 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.646867 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.686110 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-rqg7j"] Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.706463 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.727451 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clqrz\" (UniqueName: \"kubernetes.io/projected/27e29df2-0e09-486f-afe1-fb74a909567c-kube-api-access-clqrz\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.728003 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/379634e2-47ea-442d-ac65-ba86166996c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"379634e2-47ea-442d-ac65-ba86166996c8\") " pod="openstack/nova-scheduler-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.728046 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88jgt\" (UniqueName: \"kubernetes.io/projected/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-kube-api-access-88jgt\") pod \"nova-metadata-0\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.728107 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/379634e2-47ea-442d-ac65-ba86166996c8-config-data\") pod \"nova-scheduler-0\" (UID: \"379634e2-47ea-442d-ac65-ba86166996c8\") " pod="openstack/nova-scheduler-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.728155 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-logs\") pod \"nova-metadata-0\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 
20:45:46.728192 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-config-data\") pod \"nova-metadata-0\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.728247 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.728280 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.728321 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.728344 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.728427 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-config\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.728494 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-dns-svc\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.728546 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5vk\" (UniqueName: \"kubernetes.io/projected/379634e2-47ea-442d-ac65-ba86166996c8-kube-api-access-md5vk\") pod \"nova-scheduler-0\" (UID: \"379634e2-47ea-442d-ac65-ba86166996c8\") " pod="openstack/nova-scheduler-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.730379 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-logs\") pod \"nova-metadata-0\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.731346 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-config\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.731407 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: 
\"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.733456 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-dns-svc\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.738295 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.744416 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-config-data\") pod \"nova-metadata-0\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.745035 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.747537 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.751659 
4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88jgt\" (UniqueName: \"kubernetes.io/projected/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-kube-api-access-88jgt\") pod \"nova-metadata-0\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " pod="openstack/nova-metadata-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.755135 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clqrz\" (UniqueName: \"kubernetes.io/projected/27e29df2-0e09-486f-afe1-fb74a909567c-kube-api-access-clqrz\") pod \"dnsmasq-dns-757b4f8459-rqg7j\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.831502 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/379634e2-47ea-442d-ac65-ba86166996c8-config-data\") pod \"nova-scheduler-0\" (UID: \"379634e2-47ea-442d-ac65-ba86166996c8\") " pod="openstack/nova-scheduler-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.831657 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md5vk\" (UniqueName: \"kubernetes.io/projected/379634e2-47ea-442d-ac65-ba86166996c8-kube-api-access-md5vk\") pod \"nova-scheduler-0\" (UID: \"379634e2-47ea-442d-ac65-ba86166996c8\") " pod="openstack/nova-scheduler-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.831711 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/379634e2-47ea-442d-ac65-ba86166996c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"379634e2-47ea-442d-ac65-ba86166996c8\") " pod="openstack/nova-scheduler-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.840542 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/379634e2-47ea-442d-ac65-ba86166996c8-config-data\") pod \"nova-scheduler-0\" (UID: \"379634e2-47ea-442d-ac65-ba86166996c8\") " pod="openstack/nova-scheduler-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.847140 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/379634e2-47ea-442d-ac65-ba86166996c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"379634e2-47ea-442d-ac65-ba86166996c8\") " pod="openstack/nova-scheduler-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.858624 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md5vk\" (UniqueName: \"kubernetes.io/projected/379634e2-47ea-442d-ac65-ba86166996c8-kube-api-access-md5vk\") pod \"nova-scheduler-0\" (UID: \"379634e2-47ea-442d-ac65-ba86166996c8\") " pod="openstack/nova-scheduler-0" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.974278 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:46 crc kubenswrapper[4983]: I1125 20:45:46.985135 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.001861 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-bhd54"] Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.039516 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.182329 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.254385 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.319844 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nwl6b"] Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.321706 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.339729 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.339953 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.362483 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-config-data\") pod \"nova-cell1-conductor-db-sync-nwl6b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.362598 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-scripts\") pod \"nova-cell1-conductor-db-sync-nwl6b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.362775 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfrkv\" (UniqueName: \"kubernetes.io/projected/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-kube-api-access-gfrkv\") pod \"nova-cell1-conductor-db-sync-nwl6b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.362948 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nwl6b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.388685 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nwl6b"] Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.464370 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-config-data\") pod \"nova-cell1-conductor-db-sync-nwl6b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.464413 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-scripts\") pod \"nova-cell1-conductor-db-sync-nwl6b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.464447 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfrkv\" (UniqueName: \"kubernetes.io/projected/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-kube-api-access-gfrkv\") pod 
\"nova-cell1-conductor-db-sync-nwl6b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.464480 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nwl6b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.472247 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-config-data\") pod \"nova-cell1-conductor-db-sync-nwl6b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.473367 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nwl6b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.488392 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-scripts\") pod \"nova-cell1-conductor-db-sync-nwl6b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.490771 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfrkv\" (UniqueName: \"kubernetes.io/projected/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-kube-api-access-gfrkv\") pod 
\"nova-cell1-conductor-db-sync-nwl6b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.674213 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:47 crc kubenswrapper[4983]: W1125 20:45:47.680805 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0eb0fc0_da93_414d_b10a_eb65c6aeca38.slice/crio-880dd40db007e9a42e658b2562e90e6a1c55f314be77f54bfb0a3a7d68f228c6 WatchSource:0}: Error finding container 880dd40db007e9a42e658b2562e90e6a1c55f314be77f54bfb0a3a7d68f228c6: Status 404 returned error can't find the container with id 880dd40db007e9a42e658b2562e90e6a1c55f314be77f54bfb0a3a7d68f228c6 Nov 25 20:45:47 crc kubenswrapper[4983]: W1125 20:45:47.682392 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27e29df2_0e09_486f_afe1_fb74a909567c.slice/crio-aa9657ef8fb07211a70a86ee63f0e28d1fc7d0a80cf0b5af767f237d51d4dfd3 WatchSource:0}: Error finding container aa9657ef8fb07211a70a86ee63f0e28d1fc7d0a80cf0b5af767f237d51d4dfd3: Status 404 returned error can't find the container with id aa9657ef8fb07211a70a86ee63f0e28d1fc7d0a80cf0b5af767f237d51d4dfd3 Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.684379 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-rqg7j"] Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.771758 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:47 crc kubenswrapper[4983]: I1125 20:45:47.863942 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:45:48 crc kubenswrapper[4983]: I1125 20:45:48.001977 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32608ae3-6acd-4024-ad12-7ed6476db3f1","Type":"ContainerStarted","Data":"7e2c3c0d2334e282cfe36eb69d72d0c3abca1a778a216b3cea877c2ab46b4490"} Nov 25 20:45:48 crc kubenswrapper[4983]: I1125 20:45:48.004162 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0eb0fc0-da93-414d-b10a-eb65c6aeca38","Type":"ContainerStarted","Data":"880dd40db007e9a42e658b2562e90e6a1c55f314be77f54bfb0a3a7d68f228c6"} Nov 25 20:45:48 crc kubenswrapper[4983]: I1125 20:45:48.005400 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"379634e2-47ea-442d-ac65-ba86166996c8","Type":"ContainerStarted","Data":"f492f1ade8e106cc1a90389aa9fe1af2fbac5c41bb5b50bac2d3b10c00d636b5"} Nov 25 20:45:48 crc kubenswrapper[4983]: I1125 20:45:48.009899 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" event={"ID":"27e29df2-0e09-486f-afe1-fb74a909567c","Type":"ContainerStarted","Data":"aa9657ef8fb07211a70a86ee63f0e28d1fc7d0a80cf0b5af767f237d51d4dfd3"} Nov 25 20:45:48 crc kubenswrapper[4983]: I1125 20:45:48.017171 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bhd54" event={"ID":"4ae5733a-6f6c-40cb-bc80-0110e4549e58","Type":"ContainerStarted","Data":"88b7fc1934a5ad182064fdbcdabaef6361bcfe5a1eb614e2f2c498851f83eb08"} Nov 25 20:45:48 crc kubenswrapper[4983]: I1125 20:45:48.017213 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bhd54" 
event={"ID":"4ae5733a-6f6c-40cb-bc80-0110e4549e58","Type":"ContainerStarted","Data":"5bed6dff0443998dc4184cbeb563afe3991176e76eddff13c7356a8dc11d4244"} Nov 25 20:45:48 crc kubenswrapper[4983]: I1125 20:45:48.019169 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3173a912-98b8-4681-88a3-3903ad98a52d","Type":"ContainerStarted","Data":"a233626f20f423bbf7abf5d6e3b7888c5b74e6dc3ee6bb7b3bcafe0b9c999ec0"} Nov 25 20:45:48 crc kubenswrapper[4983]: I1125 20:45:48.034875 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-bhd54" podStartSLOduration=3.034852053 podStartE2EDuration="3.034852053s" podCreationTimestamp="2025-11-25 20:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:48.032177062 +0000 UTC m=+1129.144710444" watchObservedRunningTime="2025-11-25 20:45:48.034852053 +0000 UTC m=+1129.147385445" Nov 25 20:45:48 crc kubenswrapper[4983]: I1125 20:45:48.422493 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nwl6b"] Nov 25 20:45:49 crc kubenswrapper[4983]: I1125 20:45:49.033076 4983 generic.go:334] "Generic (PLEG): container finished" podID="27e29df2-0e09-486f-afe1-fb74a909567c" containerID="e5a383900a489bd447ca33be85c057bbf0a472365e2dd94d8a88f84d1f177e7d" exitCode=0 Nov 25 20:45:49 crc kubenswrapper[4983]: I1125 20:45:49.033330 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" event={"ID":"27e29df2-0e09-486f-afe1-fb74a909567c","Type":"ContainerDied","Data":"e5a383900a489bd447ca33be85c057bbf0a472365e2dd94d8a88f84d1f177e7d"} Nov 25 20:45:49 crc kubenswrapper[4983]: I1125 20:45:49.037990 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nwl6b" 
event={"ID":"81bdd9b1-6872-4f3c-afe2-4d403b0db52b","Type":"ContainerStarted","Data":"0f02a57f92677e8980e72f4f15170a2424d745ec975617511dfadab782547771"} Nov 25 20:45:49 crc kubenswrapper[4983]: I1125 20:45:49.038053 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nwl6b" event={"ID":"81bdd9b1-6872-4f3c-afe2-4d403b0db52b","Type":"ContainerStarted","Data":"fdece992a90af702f07d73f7720d60526fddd3188c6f8df1d02154b38dccd818"} Nov 25 20:45:49 crc kubenswrapper[4983]: I1125 20:45:49.077472 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nwl6b" podStartSLOduration=2.077448449 podStartE2EDuration="2.077448449s" podCreationTimestamp="2025-11-25 20:45:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:49.075309742 +0000 UTC m=+1130.187843174" watchObservedRunningTime="2025-11-25 20:45:49.077448449 +0000 UTC m=+1130.189981841" Nov 25 20:45:49 crc kubenswrapper[4983]: I1125 20:45:49.628059 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 20:45:49 crc kubenswrapper[4983]: I1125 20:45:49.653391 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.075213 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" event={"ID":"27e29df2-0e09-486f-afe1-fb74a909567c","Type":"ContainerStarted","Data":"3d511550f6f50f21762d0a13d5eab8144b2a675eeaa76f06b8b92c3cecf1f47d"} Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.076171 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.078404 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"32608ae3-6acd-4024-ad12-7ed6476db3f1","Type":"ContainerStarted","Data":"35ff35dafd0793f881a7ebb86e9bbc554932a1c7e59424da09eff644ce9ff89e"} Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.078461 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32608ae3-6acd-4024-ad12-7ed6476db3f1","Type":"ContainerStarted","Data":"57b7cfd97777b4dbad4036898be860733d61c8bb08087469a90ef4a476dbd8a3"} Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.080005 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3173a912-98b8-4681-88a3-3903ad98a52d","Type":"ContainerStarted","Data":"2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4"} Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.080058 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3173a912-98b8-4681-88a3-3903ad98a52d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4" gracePeriod=30 Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.083133 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0eb0fc0-da93-414d-b10a-eb65c6aeca38","Type":"ContainerStarted","Data":"6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb"} Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.083168 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0eb0fc0-da93-414d-b10a-eb65c6aeca38","Type":"ContainerStarted","Data":"13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b"} Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.083343 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b0eb0fc0-da93-414d-b10a-eb65c6aeca38" containerName="nova-metadata-log" 
containerID="cri-o://13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b" gracePeriod=30 Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.083463 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b0eb0fc0-da93-414d-b10a-eb65c6aeca38" containerName="nova-metadata-metadata" containerID="cri-o://6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb" gracePeriod=30 Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.088478 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"379634e2-47ea-442d-ac65-ba86166996c8","Type":"ContainerStarted","Data":"5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6"} Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.116926 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" podStartSLOduration=6.116899596 podStartE2EDuration="6.116899596s" podCreationTimestamp="2025-11-25 20:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:52.099905517 +0000 UTC m=+1133.212438909" watchObservedRunningTime="2025-11-25 20:45:52.116899596 +0000 UTC m=+1133.229432988" Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.135145 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.189396193 podStartE2EDuration="6.135106179s" podCreationTimestamp="2025-11-25 20:45:46 +0000 UTC" firstStartedPulling="2025-11-25 20:45:47.900714381 +0000 UTC m=+1129.013247773" lastFinishedPulling="2025-11-25 20:45:50.846424367 +0000 UTC m=+1131.958957759" observedRunningTime="2025-11-25 20:45:52.12495356 +0000 UTC m=+1133.237486972" watchObservedRunningTime="2025-11-25 20:45:52.135106179 +0000 UTC m=+1133.247639571" Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 
20:45:52.161129 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.57895091 podStartE2EDuration="6.161102187s" podCreationTimestamp="2025-11-25 20:45:46 +0000 UTC" firstStartedPulling="2025-11-25 20:45:47.227448565 +0000 UTC m=+1128.339981957" lastFinishedPulling="2025-11-25 20:45:50.809599832 +0000 UTC m=+1131.922133234" observedRunningTime="2025-11-25 20:45:52.149263243 +0000 UTC m=+1133.261796635" watchObservedRunningTime="2025-11-25 20:45:52.161102187 +0000 UTC m=+1133.273635579" Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.174615 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.736651135 podStartE2EDuration="6.174587244s" podCreationTimestamp="2025-11-25 20:45:46 +0000 UTC" firstStartedPulling="2025-11-25 20:45:47.369742252 +0000 UTC m=+1128.482275644" lastFinishedPulling="2025-11-25 20:45:50.807678361 +0000 UTC m=+1131.920211753" observedRunningTime="2025-11-25 20:45:52.172473328 +0000 UTC m=+1133.285006750" watchObservedRunningTime="2025-11-25 20:45:52.174587244 +0000 UTC m=+1133.287120656" Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.199683 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.076532035 podStartE2EDuration="6.199655438s" podCreationTimestamp="2025-11-25 20:45:46 +0000 UTC" firstStartedPulling="2025-11-25 20:45:47.686346245 +0000 UTC m=+1128.798879637" lastFinishedPulling="2025-11-25 20:45:50.809469648 +0000 UTC m=+1131.922003040" observedRunningTime="2025-11-25 20:45:52.191332637 +0000 UTC m=+1133.303866049" watchObservedRunningTime="2025-11-25 20:45:52.199655438 +0000 UTC m=+1133.312188830" Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.802908 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.903914 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-logs\") pod \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.903985 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88jgt\" (UniqueName: \"kubernetes.io/projected/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-kube-api-access-88jgt\") pod \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.904118 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-combined-ca-bundle\") pod \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.904207 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-config-data\") pod \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\" (UID: \"b0eb0fc0-da93-414d-b10a-eb65c6aeca38\") " Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.904368 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-logs" (OuterVolumeSpecName: "logs") pod "b0eb0fc0-da93-414d-b10a-eb65c6aeca38" (UID: "b0eb0fc0-da93-414d-b10a-eb65c6aeca38"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.905002 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.912230 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-kube-api-access-88jgt" (OuterVolumeSpecName: "kube-api-access-88jgt") pod "b0eb0fc0-da93-414d-b10a-eb65c6aeca38" (UID: "b0eb0fc0-da93-414d-b10a-eb65c6aeca38"). InnerVolumeSpecName "kube-api-access-88jgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.935835 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0eb0fc0-da93-414d-b10a-eb65c6aeca38" (UID: "b0eb0fc0-da93-414d-b10a-eb65c6aeca38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:52 crc kubenswrapper[4983]: I1125 20:45:52.961266 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-config-data" (OuterVolumeSpecName: "config-data") pod "b0eb0fc0-da93-414d-b10a-eb65c6aeca38" (UID: "b0eb0fc0-da93-414d-b10a-eb65c6aeca38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.008387 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88jgt\" (UniqueName: \"kubernetes.io/projected/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-kube-api-access-88jgt\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.008497 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.008726 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0eb0fc0-da93-414d-b10a-eb65c6aeca38-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.107141 4983 generic.go:334] "Generic (PLEG): container finished" podID="b0eb0fc0-da93-414d-b10a-eb65c6aeca38" containerID="6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb" exitCode=0 Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.107186 4983 generic.go:334] "Generic (PLEG): container finished" podID="b0eb0fc0-da93-414d-b10a-eb65c6aeca38" containerID="13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b" exitCode=143 Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.107252 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.107327 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0eb0fc0-da93-414d-b10a-eb65c6aeca38","Type":"ContainerDied","Data":"6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb"} Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.107368 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0eb0fc0-da93-414d-b10a-eb65c6aeca38","Type":"ContainerDied","Data":"13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b"} Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.107380 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0eb0fc0-da93-414d-b10a-eb65c6aeca38","Type":"ContainerDied","Data":"880dd40db007e9a42e658b2562e90e6a1c55f314be77f54bfb0a3a7d68f228c6"} Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.107402 4983 scope.go:117] "RemoveContainer" containerID="6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.149809 4983 scope.go:117] "RemoveContainer" containerID="13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.172537 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.186354 4983 scope.go:117] "RemoveContainer" containerID="6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb" Nov 25 20:45:53 crc kubenswrapper[4983]: E1125 20:45:53.186947 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb\": container with ID starting with 6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb 
not found: ID does not exist" containerID="6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.187008 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb"} err="failed to get container status \"6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb\": rpc error: code = NotFound desc = could not find container \"6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb\": container with ID starting with 6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb not found: ID does not exist" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.187042 4983 scope.go:117] "RemoveContainer" containerID="13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b" Nov 25 20:45:53 crc kubenswrapper[4983]: E1125 20:45:53.187403 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b\": container with ID starting with 13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b not found: ID does not exist" containerID="13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.187442 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b"} err="failed to get container status \"13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b\": rpc error: code = NotFound desc = could not find container \"13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b\": container with ID starting with 13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b not found: ID does not exist" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 
20:45:53.187472 4983 scope.go:117] "RemoveContainer" containerID="6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.187732 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb"} err="failed to get container status \"6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb\": rpc error: code = NotFound desc = could not find container \"6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb\": container with ID starting with 6f939deecc8791d0e95bc4aab2274910bd5670858da56c72211b6e1380b562eb not found: ID does not exist" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.187759 4983 scope.go:117] "RemoveContainer" containerID="13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.187974 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b"} err="failed to get container status \"13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b\": rpc error: code = NotFound desc = could not find container \"13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b\": container with ID starting with 13c2076b1bf7f23b2841ebb29d912e92a82b7f28d0a099a98ede86133832530b not found: ID does not exist" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.196654 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.210983 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:53 crc kubenswrapper[4983]: E1125 20:45:53.211529 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0eb0fc0-da93-414d-b10a-eb65c6aeca38" 
containerName="nova-metadata-log" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.211583 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0eb0fc0-da93-414d-b10a-eb65c6aeca38" containerName="nova-metadata-log" Nov 25 20:45:53 crc kubenswrapper[4983]: E1125 20:45:53.211622 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0eb0fc0-da93-414d-b10a-eb65c6aeca38" containerName="nova-metadata-metadata" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.211634 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0eb0fc0-da93-414d-b10a-eb65c6aeca38" containerName="nova-metadata-metadata" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.211925 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0eb0fc0-da93-414d-b10a-eb65c6aeca38" containerName="nova-metadata-metadata" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.211958 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0eb0fc0-da93-414d-b10a-eb65c6aeca38" containerName="nova-metadata-log" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.213533 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.220025 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.220631 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.227805 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.317587 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-config-data\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.317653 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.318044 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.318212 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crq45\" (UniqueName: 
\"kubernetes.io/projected/1c84f744-8279-47bd-9c97-84da09c9c727-kube-api-access-crq45\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.318290 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c84f744-8279-47bd-9c97-84da09c9c727-logs\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.419747 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.419905 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.419963 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crq45\" (UniqueName: \"kubernetes.io/projected/1c84f744-8279-47bd-9c97-84da09c9c727-kube-api-access-crq45\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.420003 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c84f744-8279-47bd-9c97-84da09c9c727-logs\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " 
pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.420053 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-config-data\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.422503 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c84f744-8279-47bd-9c97-84da09c9c727-logs\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.425789 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-config-data\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.436202 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.438621 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.440355 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crq45\" (UniqueName: 
\"kubernetes.io/projected/1c84f744-8279-47bd-9c97-84da09c9c727-kube-api-access-crq45\") pod \"nova-metadata-0\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.556313 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:45:53 crc kubenswrapper[4983]: I1125 20:45:53.623040 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0eb0fc0-da93-414d-b10a-eb65c6aeca38" path="/var/lib/kubelet/pods/b0eb0fc0-da93-414d-b10a-eb65c6aeca38/volumes" Nov 25 20:45:54 crc kubenswrapper[4983]: I1125 20:45:54.051379 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:54 crc kubenswrapper[4983]: I1125 20:45:54.123509 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1c84f744-8279-47bd-9c97-84da09c9c727","Type":"ContainerStarted","Data":"29c16d5bae94c7abdc8611459deef98a55c4afedff2dc88b5bd4f5be60e2abfe"} Nov 25 20:45:55 crc kubenswrapper[4983]: I1125 20:45:55.144243 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1c84f744-8279-47bd-9c97-84da09c9c727","Type":"ContainerStarted","Data":"a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0"} Nov 25 20:45:55 crc kubenswrapper[4983]: I1125 20:45:55.144793 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1c84f744-8279-47bd-9c97-84da09c9c727","Type":"ContainerStarted","Data":"f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4"} Nov 25 20:45:55 crc kubenswrapper[4983]: I1125 20:45:55.181567 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.18152245 podStartE2EDuration="2.18152245s" podCreationTimestamp="2025-11-25 20:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:45:55.16867944 +0000 UTC m=+1136.281212892" watchObservedRunningTime="2025-11-25 20:45:55.18152245 +0000 UTC m=+1136.294055852" Nov 25 20:45:56 crc kubenswrapper[4983]: I1125 20:45:56.155879 4983 generic.go:334] "Generic (PLEG): container finished" podID="4ae5733a-6f6c-40cb-bc80-0110e4549e58" containerID="88b7fc1934a5ad182064fdbcdabaef6361bcfe5a1eb614e2f2c498851f83eb08" exitCode=0 Nov 25 20:45:56 crc kubenswrapper[4983]: I1125 20:45:56.155968 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bhd54" event={"ID":"4ae5733a-6f6c-40cb-bc80-0110e4549e58","Type":"ContainerDied","Data":"88b7fc1934a5ad182064fdbcdabaef6361bcfe5a1eb614e2f2c498851f83eb08"} Nov 25 20:45:56 crc kubenswrapper[4983]: I1125 20:45:56.157393 4983 generic.go:334] "Generic (PLEG): container finished" podID="81bdd9b1-6872-4f3c-afe2-4d403b0db52b" containerID="0f02a57f92677e8980e72f4f15170a2424d745ec975617511dfadab782547771" exitCode=0 Nov 25 20:45:56 crc kubenswrapper[4983]: I1125 20:45:56.157463 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nwl6b" event={"ID":"81bdd9b1-6872-4f3c-afe2-4d403b0db52b","Type":"ContainerDied","Data":"0f02a57f92677e8980e72f4f15170a2424d745ec975617511dfadab782547771"} Nov 25 20:45:56 crc kubenswrapper[4983]: I1125 20:45:56.627407 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:45:56 crc kubenswrapper[4983]: I1125 20:45:56.708673 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 20:45:56 crc kubenswrapper[4983]: I1125 20:45:56.708781 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 20:45:56 crc kubenswrapper[4983]: I1125 20:45:56.976865 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:45:56 crc kubenswrapper[4983]: I1125 20:45:56.986597 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 20:45:56 crc kubenswrapper[4983]: I1125 20:45:56.986643 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.044963 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.056128 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-r25m6"] Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.056928 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" podUID="f3ab4993-1e73-4209-a907-0e4dd00708aa" containerName="dnsmasq-dns" containerID="cri-o://b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e" gracePeriod=10 Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.229030 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.724784 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.795748 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="32608ae3-6acd-4024-ad12-7ed6476db3f1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.795745 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="32608ae3-6acd-4024-ad12-7ed6476db3f1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.835360 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft68d\" (UniqueName: \"kubernetes.io/projected/4ae5733a-6f6c-40cb-bc80-0110e4549e58-kube-api-access-ft68d\") pod \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.835496 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-combined-ca-bundle\") pod \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.835584 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-config-data\") pod \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.835651 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-scripts\") pod \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\" (UID: \"4ae5733a-6f6c-40cb-bc80-0110e4549e58\") " Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.851918 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-scripts" (OuterVolumeSpecName: "scripts") pod "4ae5733a-6f6c-40cb-bc80-0110e4549e58" (UID: "4ae5733a-6f6c-40cb-bc80-0110e4549e58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.852020 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ae5733a-6f6c-40cb-bc80-0110e4549e58-kube-api-access-ft68d" (OuterVolumeSpecName: "kube-api-access-ft68d") pod "4ae5733a-6f6c-40cb-bc80-0110e4549e58" (UID: "4ae5733a-6f6c-40cb-bc80-0110e4549e58"). InnerVolumeSpecName "kube-api-access-ft68d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.885215 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ae5733a-6f6c-40cb-bc80-0110e4549e58" (UID: "4ae5733a-6f6c-40cb-bc80-0110e4549e58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.897434 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-config-data" (OuterVolumeSpecName: "config-data") pod "4ae5733a-6f6c-40cb-bc80-0110e4549e58" (UID: "4ae5733a-6f6c-40cb-bc80-0110e4549e58"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.909502 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.918085 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.937945 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft68d\" (UniqueName: \"kubernetes.io/projected/4ae5733a-6f6c-40cb-bc80-0110e4549e58-kube-api-access-ft68d\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.937975 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.937987 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:57 crc kubenswrapper[4983]: I1125 20:45:57.937995 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ae5733a-6f6c-40cb-bc80-0110e4549e58-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.039292 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-dns-swift-storage-0\") pod \"f3ab4993-1e73-4209-a907-0e4dd00708aa\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.039358 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-hs4zn\" (UniqueName: \"kubernetes.io/projected/f3ab4993-1e73-4209-a907-0e4dd00708aa-kube-api-access-hs4zn\") pod \"f3ab4993-1e73-4209-a907-0e4dd00708aa\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.039411 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-config-data\") pod \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.039434 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfrkv\" (UniqueName: \"kubernetes.io/projected/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-kube-api-access-gfrkv\") pod \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.039510 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-scripts\") pod \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.039529 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-ovsdbserver-sb\") pod \"f3ab4993-1e73-4209-a907-0e4dd00708aa\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.039563 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-config\") pod \"f3ab4993-1e73-4209-a907-0e4dd00708aa\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " Nov 25 20:45:58 
crc kubenswrapper[4983]: I1125 20:45:58.039582 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-combined-ca-bundle\") pod \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\" (UID: \"81bdd9b1-6872-4f3c-afe2-4d403b0db52b\") " Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.039657 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-ovsdbserver-nb\") pod \"f3ab4993-1e73-4209-a907-0e4dd00708aa\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.039705 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-dns-svc\") pod \"f3ab4993-1e73-4209-a907-0e4dd00708aa\" (UID: \"f3ab4993-1e73-4209-a907-0e4dd00708aa\") " Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.049431 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3ab4993-1e73-4209-a907-0e4dd00708aa-kube-api-access-hs4zn" (OuterVolumeSpecName: "kube-api-access-hs4zn") pod "f3ab4993-1e73-4209-a907-0e4dd00708aa" (UID: "f3ab4993-1e73-4209-a907-0e4dd00708aa"). InnerVolumeSpecName "kube-api-access-hs4zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.049907 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-kube-api-access-gfrkv" (OuterVolumeSpecName: "kube-api-access-gfrkv") pod "81bdd9b1-6872-4f3c-afe2-4d403b0db52b" (UID: "81bdd9b1-6872-4f3c-afe2-4d403b0db52b"). InnerVolumeSpecName "kube-api-access-gfrkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.057828 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-scripts" (OuterVolumeSpecName: "scripts") pod "81bdd9b1-6872-4f3c-afe2-4d403b0db52b" (UID: "81bdd9b1-6872-4f3c-afe2-4d403b0db52b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.074931 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-config-data" (OuterVolumeSpecName: "config-data") pod "81bdd9b1-6872-4f3c-afe2-4d403b0db52b" (UID: "81bdd9b1-6872-4f3c-afe2-4d403b0db52b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.095455 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3ab4993-1e73-4209-a907-0e4dd00708aa" (UID: "f3ab4993-1e73-4209-a907-0e4dd00708aa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.099413 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81bdd9b1-6872-4f3c-afe2-4d403b0db52b" (UID: "81bdd9b1-6872-4f3c-afe2-4d403b0db52b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.106448 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3ab4993-1e73-4209-a907-0e4dd00708aa" (UID: "f3ab4993-1e73-4209-a907-0e4dd00708aa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.113452 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f3ab4993-1e73-4209-a907-0e4dd00708aa" (UID: "f3ab4993-1e73-4209-a907-0e4dd00708aa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.120222 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-config" (OuterVolumeSpecName: "config") pod "f3ab4993-1e73-4209-a907-0e4dd00708aa" (UID: "f3ab4993-1e73-4209-a907-0e4dd00708aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.122797 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3ab4993-1e73-4209-a907-0e4dd00708aa" (UID: "f3ab4993-1e73-4209-a907-0e4dd00708aa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.141225 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.141256 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfrkv\" (UniqueName: \"kubernetes.io/projected/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-kube-api-access-gfrkv\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.141268 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.141276 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.141351 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.141362 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bdd9b1-6872-4f3c-afe2-4d403b0db52b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.141370 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.141380 4983 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.141388 4983 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3ab4993-1e73-4209-a907-0e4dd00708aa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.141398 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs4zn\" (UniqueName: \"kubernetes.io/projected/f3ab4993-1e73-4209-a907-0e4dd00708aa-kube-api-access-hs4zn\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.194548 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-bhd54" event={"ID":"4ae5733a-6f6c-40cb-bc80-0110e4549e58","Type":"ContainerDied","Data":"5bed6dff0443998dc4184cbeb563afe3991176e76eddff13c7356a8dc11d4244"} Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.194615 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bed6dff0443998dc4184cbeb563afe3991176e76eddff13c7356a8dc11d4244" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.194591 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-bhd54" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.196345 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nwl6b" event={"ID":"81bdd9b1-6872-4f3c-afe2-4d403b0db52b","Type":"ContainerDied","Data":"fdece992a90af702f07d73f7720d60526fddd3188c6f8df1d02154b38dccd818"} Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.196394 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdece992a90af702f07d73f7720d60526fddd3188c6f8df1d02154b38dccd818" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.196471 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nwl6b" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.199467 4983 generic.go:334] "Generic (PLEG): container finished" podID="f3ab4993-1e73-4209-a907-0e4dd00708aa" containerID="b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e" exitCode=0 Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.199516 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.199739 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" event={"ID":"f3ab4993-1e73-4209-a907-0e4dd00708aa","Type":"ContainerDied","Data":"b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e"} Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.199914 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-r25m6" event={"ID":"f3ab4993-1e73-4209-a907-0e4dd00708aa","Type":"ContainerDied","Data":"7826a96d18780ce9a07f310ca1c938d146bb5cc6dbce83c0590eb79cef8f8a4a"} Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.199960 4983 scope.go:117] "RemoveContainer" containerID="b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.284884 4983 scope.go:117] "RemoveContainer" containerID="844723af89d769b6059c4085fdd8a7abf503f2957227e634fdd6b116047c3c0a" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.285272 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 20:45:58 crc kubenswrapper[4983]: E1125 20:45:58.285919 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ae5733a-6f6c-40cb-bc80-0110e4549e58" containerName="nova-manage" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.285942 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ae5733a-6f6c-40cb-bc80-0110e4549e58" containerName="nova-manage" Nov 25 20:45:58 crc kubenswrapper[4983]: E1125 20:45:58.285956 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ab4993-1e73-4209-a907-0e4dd00708aa" containerName="dnsmasq-dns" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.285963 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ab4993-1e73-4209-a907-0e4dd00708aa" containerName="dnsmasq-dns" Nov 25 
20:45:58 crc kubenswrapper[4983]: E1125 20:45:58.285978 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3ab4993-1e73-4209-a907-0e4dd00708aa" containerName="init" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.285985 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3ab4993-1e73-4209-a907-0e4dd00708aa" containerName="init" Nov 25 20:45:58 crc kubenswrapper[4983]: E1125 20:45:58.286009 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81bdd9b1-6872-4f3c-afe2-4d403b0db52b" containerName="nova-cell1-conductor-db-sync" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.286017 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="81bdd9b1-6872-4f3c-afe2-4d403b0db52b" containerName="nova-cell1-conductor-db-sync" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.286241 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ae5733a-6f6c-40cb-bc80-0110e4549e58" containerName="nova-manage" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.286257 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="81bdd9b1-6872-4f3c-afe2-4d403b0db52b" containerName="nova-cell1-conductor-db-sync" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.286282 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3ab4993-1e73-4209-a907-0e4dd00708aa" containerName="dnsmasq-dns" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.287227 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.296295 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.296573 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.301753 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-r25m6"] Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.310896 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-r25m6"] Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.336688 4983 scope.go:117] "RemoveContainer" containerID="b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e" Nov 25 20:45:58 crc kubenswrapper[4983]: E1125 20:45:58.337313 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e\": container with ID starting with b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e not found: ID does not exist" containerID="b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.337377 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e"} err="failed to get container status \"b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e\": rpc error: code = NotFound desc = could not find container \"b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e\": container with ID starting with b69a20737daeb2f501a5b30b793255a1fb1b7155e873f4730f2e2ce96e290f1e not found: ID does not exist" Nov 25 20:45:58 crc 
kubenswrapper[4983]: I1125 20:45:58.337416 4983 scope.go:117] "RemoveContainer" containerID="844723af89d769b6059c4085fdd8a7abf503f2957227e634fdd6b116047c3c0a" Nov 25 20:45:58 crc kubenswrapper[4983]: E1125 20:45:58.337965 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"844723af89d769b6059c4085fdd8a7abf503f2957227e634fdd6b116047c3c0a\": container with ID starting with 844723af89d769b6059c4085fdd8a7abf503f2957227e634fdd6b116047c3c0a not found: ID does not exist" containerID="844723af89d769b6059c4085fdd8a7abf503f2957227e634fdd6b116047c3c0a" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.338017 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"844723af89d769b6059c4085fdd8a7abf503f2957227e634fdd6b116047c3c0a"} err="failed to get container status \"844723af89d769b6059c4085fdd8a7abf503f2957227e634fdd6b116047c3c0a\": rpc error: code = NotFound desc = could not find container \"844723af89d769b6059c4085fdd8a7abf503f2957227e634fdd6b116047c3c0a\": container with ID starting with 844723af89d769b6059c4085fdd8a7abf503f2957227e634fdd6b116047c3c0a not found: ID does not exist" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.448803 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb68b08-479a-4831-b6a5-ad478a3922e5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3eb68b08-479a-4831-b6a5-ad478a3922e5\") " pod="openstack/nova-cell1-conductor-0" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.448952 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb68b08-479a-4831-b6a5-ad478a3922e5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3eb68b08-479a-4831-b6a5-ad478a3922e5\") " 
pod="openstack/nova-cell1-conductor-0" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.449094 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfkkc\" (UniqueName: \"kubernetes.io/projected/3eb68b08-479a-4831-b6a5-ad478a3922e5-kube-api-access-hfkkc\") pod \"nova-cell1-conductor-0\" (UID: \"3eb68b08-479a-4831-b6a5-ad478a3922e5\") " pod="openstack/nova-cell1-conductor-0" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.509496 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.519225 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.519671 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="32608ae3-6acd-4024-ad12-7ed6476db3f1" containerName="nova-api-log" containerID="cri-o://57b7cfd97777b4dbad4036898be860733d61c8bb08087469a90ef4a476dbd8a3" gracePeriod=30 Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.519746 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="32608ae3-6acd-4024-ad12-7ed6476db3f1" containerName="nova-api-api" containerID="cri-o://35ff35dafd0793f881a7ebb86e9bbc554932a1c7e59424da09eff644ce9ff89e" gracePeriod=30 Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.529143 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.529501 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1c84f744-8279-47bd-9c97-84da09c9c727" containerName="nova-metadata-log" containerID="cri-o://f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4" gracePeriod=30 Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.529550 
4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1c84f744-8279-47bd-9c97-84da09c9c727" containerName="nova-metadata-metadata" containerID="cri-o://a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0" gracePeriod=30 Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.551516 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb68b08-479a-4831-b6a5-ad478a3922e5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3eb68b08-479a-4831-b6a5-ad478a3922e5\") " pod="openstack/nova-cell1-conductor-0" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.551611 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb68b08-479a-4831-b6a5-ad478a3922e5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3eb68b08-479a-4831-b6a5-ad478a3922e5\") " pod="openstack/nova-cell1-conductor-0" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.551697 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfkkc\" (UniqueName: \"kubernetes.io/projected/3eb68b08-479a-4831-b6a5-ad478a3922e5-kube-api-access-hfkkc\") pod \"nova-cell1-conductor-0\" (UID: \"3eb68b08-479a-4831-b6a5-ad478a3922e5\") " pod="openstack/nova-cell1-conductor-0" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.556954 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.557013 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.557198 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3eb68b08-479a-4831-b6a5-ad478a3922e5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3eb68b08-479a-4831-b6a5-ad478a3922e5\") " pod="openstack/nova-cell1-conductor-0" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.557459 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb68b08-479a-4831-b6a5-ad478a3922e5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3eb68b08-479a-4831-b6a5-ad478a3922e5\") " pod="openstack/nova-cell1-conductor-0" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.571329 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfkkc\" (UniqueName: \"kubernetes.io/projected/3eb68b08-479a-4831-b6a5-ad478a3922e5-kube-api-access-hfkkc\") pod \"nova-cell1-conductor-0\" (UID: \"3eb68b08-479a-4831-b6a5-ad478a3922e5\") " pod="openstack/nova-cell1-conductor-0" Nov 25 20:45:58 crc kubenswrapper[4983]: I1125 20:45:58.668246 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.138114 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.214005 4983 generic.go:334] "Generic (PLEG): container finished" podID="1c84f744-8279-47bd-9c97-84da09c9c727" containerID="a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0" exitCode=0 Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.214040 4983 generic.go:334] "Generic (PLEG): container finished" podID="1c84f744-8279-47bd-9c97-84da09c9c727" containerID="f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4" exitCode=143 Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.214118 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1c84f744-8279-47bd-9c97-84da09c9c727","Type":"ContainerDied","Data":"a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0"} Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.214149 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1c84f744-8279-47bd-9c97-84da09c9c727","Type":"ContainerDied","Data":"f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4"} Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.214162 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1c84f744-8279-47bd-9c97-84da09c9c727","Type":"ContainerDied","Data":"29c16d5bae94c7abdc8611459deef98a55c4afedff2dc88b5bd4f5be60e2abfe"} Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.214215 4983 scope.go:117] "RemoveContainer" containerID="a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.214369 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.230721 4983 generic.go:334] "Generic (PLEG): container finished" podID="32608ae3-6acd-4024-ad12-7ed6476db3f1" containerID="57b7cfd97777b4dbad4036898be860733d61c8bb08087469a90ef4a476dbd8a3" exitCode=143 Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.230825 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32608ae3-6acd-4024-ad12-7ed6476db3f1","Type":"ContainerDied","Data":"57b7cfd97777b4dbad4036898be860733d61c8bb08087469a90ef4a476dbd8a3"} Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.230947 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="379634e2-47ea-442d-ac65-ba86166996c8" containerName="nova-scheduler-scheduler" containerID="cri-o://5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6" gracePeriod=30 Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.251131 4983 scope.go:117] "RemoveContainer" containerID="f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.257678 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.269497 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crq45\" (UniqueName: \"kubernetes.io/projected/1c84f744-8279-47bd-9c97-84da09c9c727-kube-api-access-crq45\") pod \"1c84f744-8279-47bd-9c97-84da09c9c727\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.269585 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-combined-ca-bundle\") pod \"1c84f744-8279-47bd-9c97-84da09c9c727\" (UID: 
\"1c84f744-8279-47bd-9c97-84da09c9c727\") " Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.269626 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-nova-metadata-tls-certs\") pod \"1c84f744-8279-47bd-9c97-84da09c9c727\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.269791 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c84f744-8279-47bd-9c97-84da09c9c727-logs\") pod \"1c84f744-8279-47bd-9c97-84da09c9c727\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.269816 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-config-data\") pod \"1c84f744-8279-47bd-9c97-84da09c9c727\" (UID: \"1c84f744-8279-47bd-9c97-84da09c9c727\") " Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.272082 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c84f744-8279-47bd-9c97-84da09c9c727-logs" (OuterVolumeSpecName: "logs") pod "1c84f744-8279-47bd-9c97-84da09c9c727" (UID: "1c84f744-8279-47bd-9c97-84da09c9c727"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.274741 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c84f744-8279-47bd-9c97-84da09c9c727-kube-api-access-crq45" (OuterVolumeSpecName: "kube-api-access-crq45") pod "1c84f744-8279-47bd-9c97-84da09c9c727" (UID: "1c84f744-8279-47bd-9c97-84da09c9c727"). InnerVolumeSpecName "kube-api-access-crq45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.299954 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-config-data" (OuterVolumeSpecName: "config-data") pod "1c84f744-8279-47bd-9c97-84da09c9c727" (UID: "1c84f744-8279-47bd-9c97-84da09c9c727"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.322922 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c84f744-8279-47bd-9c97-84da09c9c727" (UID: "1c84f744-8279-47bd-9c97-84da09c9c727"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.347343 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1c84f744-8279-47bd-9c97-84da09c9c727" (UID: "1c84f744-8279-47bd-9c97-84da09c9c727"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.371682 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crq45\" (UniqueName: \"kubernetes.io/projected/1c84f744-8279-47bd-9c97-84da09c9c727-kube-api-access-crq45\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.371718 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.371730 4983 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.371738 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c84f744-8279-47bd-9c97-84da09c9c727-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.371750 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c84f744-8279-47bd-9c97-84da09c9c727-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.436502 4983 scope.go:117] "RemoveContainer" containerID="a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0" Nov 25 20:45:59 crc kubenswrapper[4983]: E1125 20:45:59.437257 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0\": container with ID starting with a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0 not found: ID does not exist" 
containerID="a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.437299 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0"} err="failed to get container status \"a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0\": rpc error: code = NotFound desc = could not find container \"a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0\": container with ID starting with a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0 not found: ID does not exist" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.437330 4983 scope.go:117] "RemoveContainer" containerID="f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4" Nov 25 20:45:59 crc kubenswrapper[4983]: E1125 20:45:59.437882 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4\": container with ID starting with f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4 not found: ID does not exist" containerID="f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.437942 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4"} err="failed to get container status \"f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4\": rpc error: code = NotFound desc = could not find container \"f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4\": container with ID starting with f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4 not found: ID does not exist" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.437981 4983 scope.go:117] 
"RemoveContainer" containerID="a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.438410 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0"} err="failed to get container status \"a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0\": rpc error: code = NotFound desc = could not find container \"a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0\": container with ID starting with a7a258fe2a09a3312bf44e158da3c7dc95304717beab58b17edd6d891fbe4bc0 not found: ID does not exist" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.438471 4983 scope.go:117] "RemoveContainer" containerID="f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.438826 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4"} err="failed to get container status \"f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4\": rpc error: code = NotFound desc = could not find container \"f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4\": container with ID starting with f519ee3df4a47c3a5f5f3acf1963f1b7bd9f7fde9902bd7ab0e5164634f2dba4 not found: ID does not exist" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.559284 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.585994 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.596410 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:59 crc kubenswrapper[4983]: E1125 20:45:59.597277 4983 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1c84f744-8279-47bd-9c97-84da09c9c727" containerName="nova-metadata-log" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.597309 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c84f744-8279-47bd-9c97-84da09c9c727" containerName="nova-metadata-log" Nov 25 20:45:59 crc kubenswrapper[4983]: E1125 20:45:59.597337 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c84f744-8279-47bd-9c97-84da09c9c727" containerName="nova-metadata-metadata" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.597348 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c84f744-8279-47bd-9c97-84da09c9c727" containerName="nova-metadata-metadata" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.597738 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c84f744-8279-47bd-9c97-84da09c9c727" containerName="nova-metadata-log" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.597774 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c84f744-8279-47bd-9c97-84da09c9c727" containerName="nova-metadata-metadata" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.599653 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.608199 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.610667 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.635450 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c84f744-8279-47bd-9c97-84da09c9c727" path="/var/lib/kubelet/pods/1c84f744-8279-47bd-9c97-84da09c9c727/volumes" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.636309 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3ab4993-1e73-4209-a907-0e4dd00708aa" path="/var/lib/kubelet/pods/f3ab4993-1e73-4209-a907-0e4dd00708aa/volumes" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.636878 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.779442 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.779502 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.779533 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-config-data\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.779649 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8hdn\" (UniqueName: \"kubernetes.io/projected/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-kube-api-access-v8hdn\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.779748 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-logs\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.881742 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-logs\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.881819 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.882163 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-logs\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " 
pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.882231 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.882275 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-config-data\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.882367 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8hdn\" (UniqueName: \"kubernetes.io/projected/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-kube-api-access-v8hdn\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.892456 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.896536 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-config-data\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.897077 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.911966 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8hdn\" (UniqueName: \"kubernetes.io/projected/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-kube-api-access-v8hdn\") pod \"nova-metadata-0\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " pod="openstack/nova-metadata-0" Nov 25 20:45:59 crc kubenswrapper[4983]: I1125 20:45:59.932118 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:46:00 crc kubenswrapper[4983]: I1125 20:46:00.241165 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:46:00 crc kubenswrapper[4983]: W1125 20:46:00.243419 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b5c408b_2bfc_4f6a_af31_b877a0e9685f.slice/crio-433e2781cf2a04056f46aed22b6896255405e13ca444152c42f0c60ac67ac8de WatchSource:0}: Error finding container 433e2781cf2a04056f46aed22b6896255405e13ca444152c42f0c60ac67ac8de: Status 404 returned error can't find the container with id 433e2781cf2a04056f46aed22b6896255405e13ca444152c42f0c60ac67ac8de Nov 25 20:46:00 crc kubenswrapper[4983]: I1125 20:46:00.245682 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3eb68b08-479a-4831-b6a5-ad478a3922e5","Type":"ContainerStarted","Data":"fc6cd97ac63f87604986f5089f9a7e3442b65c48d28fc1b3bcd39f5072a50914"} Nov 25 20:46:00 crc kubenswrapper[4983]: I1125 20:46:00.245716 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"3eb68b08-479a-4831-b6a5-ad478a3922e5","Type":"ContainerStarted","Data":"0eb9074131ac5da6dc5f4eecbdcded3b27146280f3f812848be3d169adfaac19"} Nov 25 20:46:00 crc kubenswrapper[4983]: I1125 20:46:00.246841 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 25 20:46:00 crc kubenswrapper[4983]: I1125 20:46:00.279518 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.279481404 podStartE2EDuration="2.279481404s" podCreationTimestamp="2025-11-25 20:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:46:00.27709669 +0000 UTC m=+1141.389630102" watchObservedRunningTime="2025-11-25 20:46:00.279481404 +0000 UTC m=+1141.392014806" Nov 25 20:46:01 crc kubenswrapper[4983]: I1125 20:46:01.262077 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b5c408b-2bfc-4f6a-af31-b877a0e9685f","Type":"ContainerStarted","Data":"4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d"} Nov 25 20:46:01 crc kubenswrapper[4983]: I1125 20:46:01.263837 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b5c408b-2bfc-4f6a-af31-b877a0e9685f","Type":"ContainerStarted","Data":"2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9"} Nov 25 20:46:01 crc kubenswrapper[4983]: I1125 20:46:01.265322 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b5c408b-2bfc-4f6a-af31-b877a0e9685f","Type":"ContainerStarted","Data":"433e2781cf2a04056f46aed22b6896255405e13ca444152c42f0c60ac67ac8de"} Nov 25 20:46:01 crc kubenswrapper[4983]: I1125 20:46:01.321075 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.321050341 
podStartE2EDuration="2.321050341s" podCreationTimestamp="2025-11-25 20:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:46:01.305773037 +0000 UTC m=+1142.418306439" watchObservedRunningTime="2025-11-25 20:46:01.321050341 +0000 UTC m=+1142.433583753" Nov 25 20:46:01 crc kubenswrapper[4983]: E1125 20:46:01.988673 4983 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 20:46:01 crc kubenswrapper[4983]: E1125 20:46:01.990477 4983 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 20:46:01 crc kubenswrapper[4983]: E1125 20:46:01.992930 4983 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 20:46:01 crc kubenswrapper[4983]: E1125 20:46:01.992999 4983 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="379634e2-47ea-442d-ac65-ba86166996c8" containerName="nova-scheduler-scheduler" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.172466 4983 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.256427 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/379634e2-47ea-442d-ac65-ba86166996c8-config-data\") pod \"379634e2-47ea-442d-ac65-ba86166996c8\" (UID: \"379634e2-47ea-442d-ac65-ba86166996c8\") " Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.256742 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md5vk\" (UniqueName: \"kubernetes.io/projected/379634e2-47ea-442d-ac65-ba86166996c8-kube-api-access-md5vk\") pod \"379634e2-47ea-442d-ac65-ba86166996c8\" (UID: \"379634e2-47ea-442d-ac65-ba86166996c8\") " Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.257055 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/379634e2-47ea-442d-ac65-ba86166996c8-combined-ca-bundle\") pod \"379634e2-47ea-442d-ac65-ba86166996c8\" (UID: \"379634e2-47ea-442d-ac65-ba86166996c8\") " Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.268466 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379634e2-47ea-442d-ac65-ba86166996c8-kube-api-access-md5vk" (OuterVolumeSpecName: "kube-api-access-md5vk") pod "379634e2-47ea-442d-ac65-ba86166996c8" (UID: "379634e2-47ea-442d-ac65-ba86166996c8"). InnerVolumeSpecName "kube-api-access-md5vk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.291483 4983 generic.go:334] "Generic (PLEG): container finished" podID="32608ae3-6acd-4024-ad12-7ed6476db3f1" containerID="35ff35dafd0793f881a7ebb86e9bbc554932a1c7e59424da09eff644ce9ff89e" exitCode=0 Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.291667 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32608ae3-6acd-4024-ad12-7ed6476db3f1","Type":"ContainerDied","Data":"35ff35dafd0793f881a7ebb86e9bbc554932a1c7e59424da09eff644ce9ff89e"} Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.296351 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/379634e2-47ea-442d-ac65-ba86166996c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "379634e2-47ea-442d-ac65-ba86166996c8" (UID: "379634e2-47ea-442d-ac65-ba86166996c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.297952 4983 generic.go:334] "Generic (PLEG): container finished" podID="379634e2-47ea-442d-ac65-ba86166996c8" containerID="5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6" exitCode=0 Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.298445 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"379634e2-47ea-442d-ac65-ba86166996c8","Type":"ContainerDied","Data":"5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6"} Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.298488 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.298511 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"379634e2-47ea-442d-ac65-ba86166996c8","Type":"ContainerDied","Data":"f492f1ade8e106cc1a90389aa9fe1af2fbac5c41bb5b50bac2d3b10c00d636b5"} Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.298537 4983 scope.go:117] "RemoveContainer" containerID="5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.335041 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/379634e2-47ea-442d-ac65-ba86166996c8-config-data" (OuterVolumeSpecName: "config-data") pod "379634e2-47ea-442d-ac65-ba86166996c8" (UID: "379634e2-47ea-442d-ac65-ba86166996c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.361362 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/379634e2-47ea-442d-ac65-ba86166996c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.361402 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/379634e2-47ea-442d-ac65-ba86166996c8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.361429 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md5vk\" (UniqueName: \"kubernetes.io/projected/379634e2-47ea-442d-ac65-ba86166996c8-kube-api-access-md5vk\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.376760 4983 scope.go:117] "RemoveContainer" containerID="5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6" Nov 25 20:46:03 crc kubenswrapper[4983]: 
E1125 20:46:03.377446 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6\": container with ID starting with 5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6 not found: ID does not exist" containerID="5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.377482 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6"} err="failed to get container status \"5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6\": rpc error: code = NotFound desc = could not find container \"5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6\": container with ID starting with 5419b0c6e72ee5ad001f87570951a1376b526b811066b3f99926ad246bafdbb6 not found: ID does not exist" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.379313 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.462375 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32608ae3-6acd-4024-ad12-7ed6476db3f1-combined-ca-bundle\") pod \"32608ae3-6acd-4024-ad12-7ed6476db3f1\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.462520 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32608ae3-6acd-4024-ad12-7ed6476db3f1-logs\") pod \"32608ae3-6acd-4024-ad12-7ed6476db3f1\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.462549 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5kzp\" (UniqueName: \"kubernetes.io/projected/32608ae3-6acd-4024-ad12-7ed6476db3f1-kube-api-access-z5kzp\") pod \"32608ae3-6acd-4024-ad12-7ed6476db3f1\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.462641 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32608ae3-6acd-4024-ad12-7ed6476db3f1-config-data\") pod \"32608ae3-6acd-4024-ad12-7ed6476db3f1\" (UID: \"32608ae3-6acd-4024-ad12-7ed6476db3f1\") " Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.463378 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32608ae3-6acd-4024-ad12-7ed6476db3f1-logs" (OuterVolumeSpecName: "logs") pod "32608ae3-6acd-4024-ad12-7ed6476db3f1" (UID: "32608ae3-6acd-4024-ad12-7ed6476db3f1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.467955 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32608ae3-6acd-4024-ad12-7ed6476db3f1-kube-api-access-z5kzp" (OuterVolumeSpecName: "kube-api-access-z5kzp") pod "32608ae3-6acd-4024-ad12-7ed6476db3f1" (UID: "32608ae3-6acd-4024-ad12-7ed6476db3f1"). InnerVolumeSpecName "kube-api-access-z5kzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.492679 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32608ae3-6acd-4024-ad12-7ed6476db3f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32608ae3-6acd-4024-ad12-7ed6476db3f1" (UID: "32608ae3-6acd-4024-ad12-7ed6476db3f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.510777 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32608ae3-6acd-4024-ad12-7ed6476db3f1-config-data" (OuterVolumeSpecName: "config-data") pod "32608ae3-6acd-4024-ad12-7ed6476db3f1" (UID: "32608ae3-6acd-4024-ad12-7ed6476db3f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.566464 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32608ae3-6acd-4024-ad12-7ed6476db3f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.566527 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32608ae3-6acd-4024-ad12-7ed6476db3f1-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.566548 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5kzp\" (UniqueName: \"kubernetes.io/projected/32608ae3-6acd-4024-ad12-7ed6476db3f1-kube-api-access-z5kzp\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.566596 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32608ae3-6acd-4024-ad12-7ed6476db3f1-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.643118 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.656538 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.678826 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:46:03 crc kubenswrapper[4983]: E1125 20:46:03.679311 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32608ae3-6acd-4024-ad12-7ed6476db3f1" containerName="nova-api-log" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.679338 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="32608ae3-6acd-4024-ad12-7ed6476db3f1" containerName="nova-api-log" Nov 25 20:46:03 crc kubenswrapper[4983]: 
E1125 20:46:03.679361 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="379634e2-47ea-442d-ac65-ba86166996c8" containerName="nova-scheduler-scheduler" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.679370 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="379634e2-47ea-442d-ac65-ba86166996c8" containerName="nova-scheduler-scheduler" Nov 25 20:46:03 crc kubenswrapper[4983]: E1125 20:46:03.679396 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32608ae3-6acd-4024-ad12-7ed6476db3f1" containerName="nova-api-api" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.679404 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="32608ae3-6acd-4024-ad12-7ed6476db3f1" containerName="nova-api-api" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.679642 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="32608ae3-6acd-4024-ad12-7ed6476db3f1" containerName="nova-api-log" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.679677 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="379634e2-47ea-442d-ac65-ba86166996c8" containerName="nova-scheduler-scheduler" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.679691 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="32608ae3-6acd-4024-ad12-7ed6476db3f1" containerName="nova-api-api" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.680395 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.683523 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.697003 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.770020 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.770230 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hflk\" (UniqueName: \"kubernetes.io/projected/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-kube-api-access-8hflk\") pod \"nova-scheduler-0\" (UID: \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.770266 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-config-data\") pod \"nova-scheduler-0\" (UID: \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.873742 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hflk\" (UniqueName: \"kubernetes.io/projected/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-kube-api-access-8hflk\") pod \"nova-scheduler-0\" (UID: \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.873812 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-config-data\") pod \"nova-scheduler-0\" (UID: \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.873863 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.880279 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.882261 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-config-data\") pod \"nova-scheduler-0\" (UID: \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:03 crc kubenswrapper[4983]: I1125 20:46:03.917322 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hflk\" (UniqueName: \"kubernetes.io/projected/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-kube-api-access-8hflk\") pod \"nova-scheduler-0\" (UID: \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.012342 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.313423 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32608ae3-6acd-4024-ad12-7ed6476db3f1","Type":"ContainerDied","Data":"7e2c3c0d2334e282cfe36eb69d72d0c3abca1a778a216b3cea877c2ab46b4490"} Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.314050 4983 scope.go:117] "RemoveContainer" containerID="35ff35dafd0793f881a7ebb86e9bbc554932a1c7e59424da09eff644ce9ff89e" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.313479 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.343658 4983 scope.go:117] "RemoveContainer" containerID="57b7cfd97777b4dbad4036898be860733d61c8bb08087469a90ef4a476dbd8a3" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.343719 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.353253 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.371864 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.378913 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.381222 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.383134 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.503979 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-config-data\") pod \"nova-api-0\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.504051 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbgv\" (UniqueName: \"kubernetes.io/projected/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-kube-api-access-8gbgv\") pod \"nova-api-0\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.504196 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.504239 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-logs\") pod \"nova-api-0\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.567655 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 
20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.608132 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-config-data\") pod \"nova-api-0\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.608175 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gbgv\" (UniqueName: \"kubernetes.io/projected/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-kube-api-access-8gbgv\") pod \"nova-api-0\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.608254 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.608281 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-logs\") pod \"nova-api-0\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.609500 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-logs\") pod \"nova-api-0\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.612673 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-config-data\") pod \"nova-api-0\" (UID: 
\"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.613026 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.623605 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gbgv\" (UniqueName: \"kubernetes.io/projected/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-kube-api-access-8gbgv\") pod \"nova-api-0\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.702205 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.932739 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 20:46:04 crc kubenswrapper[4983]: I1125 20:46:04.932786 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 20:46:05 crc kubenswrapper[4983]: I1125 20:46:05.233910 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:05 crc kubenswrapper[4983]: I1125 20:46:05.334946 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1","Type":"ContainerStarted","Data":"6f003e148d3b89a68be1e63708eb080536ae2caf1c25d82591c53535eb754912"} Nov 25 20:46:05 crc kubenswrapper[4983]: I1125 20:46:05.335029 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1","Type":"ContainerStarted","Data":"325e1aa7f7885d946b915bf9940324ec9b7fdaf6ea4eab732d7cb9771bc5cc41"} Nov 25 20:46:05 crc kubenswrapper[4983]: I1125 20:46:05.345177 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8a55fb6-5054-4e18-8125-e7e4f7e2009e","Type":"ContainerStarted","Data":"fdf7a10a3b7e44c71cd27064fb39b2010118a4f2d29caa6eac95e5a0f7d9972f"} Nov 25 20:46:05 crc kubenswrapper[4983]: I1125 20:46:05.368023 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.367997415 podStartE2EDuration="2.367997415s" podCreationTimestamp="2025-11-25 20:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:46:05.361603056 +0000 UTC m=+1146.474136448" watchObservedRunningTime="2025-11-25 20:46:05.367997415 +0000 UTC m=+1146.480530807" Nov 25 20:46:05 crc kubenswrapper[4983]: I1125 20:46:05.626135 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32608ae3-6acd-4024-ad12-7ed6476db3f1" path="/var/lib/kubelet/pods/32608ae3-6acd-4024-ad12-7ed6476db3f1/volumes" Nov 25 20:46:05 crc kubenswrapper[4983]: I1125 20:46:05.627735 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="379634e2-47ea-442d-ac65-ba86166996c8" path="/var/lib/kubelet/pods/379634e2-47ea-442d-ac65-ba86166996c8/volumes" Nov 25 20:46:06 crc kubenswrapper[4983]: I1125 20:46:06.359727 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8a55fb6-5054-4e18-8125-e7e4f7e2009e","Type":"ContainerStarted","Data":"2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029"} Nov 25 20:46:06 crc kubenswrapper[4983]: I1125 20:46:06.361829 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e8a55fb6-5054-4e18-8125-e7e4f7e2009e","Type":"ContainerStarted","Data":"782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491"} Nov 25 20:46:06 crc kubenswrapper[4983]: I1125 20:46:06.403055 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.403031591 podStartE2EDuration="2.403031591s" podCreationTimestamp="2025-11-25 20:46:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:46:06.39846087 +0000 UTC m=+1147.510994342" watchObservedRunningTime="2025-11-25 20:46:06.403031591 +0000 UTC m=+1147.515564993" Nov 25 20:46:07 crc kubenswrapper[4983]: I1125 20:46:07.173514 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 20:46:08 crc kubenswrapper[4983]: I1125 20:46:08.719891 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 25 20:46:09 crc kubenswrapper[4983]: I1125 20:46:09.013283 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 20:46:09 crc kubenswrapper[4983]: I1125 20:46:09.933046 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 20:46:09 crc kubenswrapper[4983]: I1125 20:46:09.933119 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 20:46:10 crc kubenswrapper[4983]: I1125 20:46:10.942715 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 20:46:10 crc kubenswrapper[4983]: I1125 20:46:10.942783 4983 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 20:46:12 crc kubenswrapper[4983]: I1125 20:46:12.970154 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 20:46:12 crc kubenswrapper[4983]: I1125 20:46:12.970873 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e9ce7970-111c-43db-81e7-1ee52d40718b" containerName="kube-state-metrics" containerID="cri-o://030e5276954d403bcf167c102210d492a52de96aed5bafbf27bd6e9fb2a09633" gracePeriod=30 Nov 25 20:46:13 crc kubenswrapper[4983]: I1125 20:46:13.435204 4983 generic.go:334] "Generic (PLEG): container finished" podID="e9ce7970-111c-43db-81e7-1ee52d40718b" containerID="030e5276954d403bcf167c102210d492a52de96aed5bafbf27bd6e9fb2a09633" exitCode=2 Nov 25 20:46:13 crc kubenswrapper[4983]: I1125 20:46:13.435359 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e9ce7970-111c-43db-81e7-1ee52d40718b","Type":"ContainerDied","Data":"030e5276954d403bcf167c102210d492a52de96aed5bafbf27bd6e9fb2a09633"} Nov 25 20:46:13 crc kubenswrapper[4983]: I1125 20:46:13.435715 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e9ce7970-111c-43db-81e7-1ee52d40718b","Type":"ContainerDied","Data":"ddb4ce3186574ec79bee76d6abe6b18330dd71632ac7b047fdddee9dab2eb2ea"} Nov 25 20:46:13 crc kubenswrapper[4983]: I1125 20:46:13.435741 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddb4ce3186574ec79bee76d6abe6b18330dd71632ac7b047fdddee9dab2eb2ea" Nov 25 20:46:13 crc kubenswrapper[4983]: I1125 20:46:13.524164 4983 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 20:46:13 crc kubenswrapper[4983]: I1125 20:46:13.631768 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hll7s\" (UniqueName: \"kubernetes.io/projected/e9ce7970-111c-43db-81e7-1ee52d40718b-kube-api-access-hll7s\") pod \"e9ce7970-111c-43db-81e7-1ee52d40718b\" (UID: \"e9ce7970-111c-43db-81e7-1ee52d40718b\") " Nov 25 20:46:13 crc kubenswrapper[4983]: I1125 20:46:13.639611 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9ce7970-111c-43db-81e7-1ee52d40718b-kube-api-access-hll7s" (OuterVolumeSpecName: "kube-api-access-hll7s") pod "e9ce7970-111c-43db-81e7-1ee52d40718b" (UID: "e9ce7970-111c-43db-81e7-1ee52d40718b"). InnerVolumeSpecName "kube-api-access-hll7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:13 crc kubenswrapper[4983]: I1125 20:46:13.737419 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hll7s\" (UniqueName: \"kubernetes.io/projected/e9ce7970-111c-43db-81e7-1ee52d40718b-kube-api-access-hll7s\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.012915 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.044266 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.446222 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.483883 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.483999 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.494614 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.512398 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 20:46:14 crc kubenswrapper[4983]: E1125 20:46:14.513025 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9ce7970-111c-43db-81e7-1ee52d40718b" containerName="kube-state-metrics" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.513050 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9ce7970-111c-43db-81e7-1ee52d40718b" containerName="kube-state-metrics" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.513287 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9ce7970-111c-43db-81e7-1ee52d40718b" containerName="kube-state-metrics" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.514205 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.527776 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.557586 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.557943 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.559368 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae259426-d08e-4d8f-b3e7-f06847f1c2da-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ae259426-d08e-4d8f-b3e7-f06847f1c2da\") " pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.559479 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae259426-d08e-4d8f-b3e7-f06847f1c2da-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ae259426-d08e-4d8f-b3e7-f06847f1c2da\") " pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.559508 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twm9l\" (UniqueName: \"kubernetes.io/projected/ae259426-d08e-4d8f-b3e7-f06847f1c2da-kube-api-access-twm9l\") pod \"kube-state-metrics-0\" (UID: \"ae259426-d08e-4d8f-b3e7-f06847f1c2da\") " pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.559561 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/ae259426-d08e-4d8f-b3e7-f06847f1c2da-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ae259426-d08e-4d8f-b3e7-f06847f1c2da\") " pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.567258 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.567624 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="proxy-httpd" containerID="cri-o://a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae" gracePeriod=30 Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.567624 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="ceilometer-central-agent" containerID="cri-o://acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3" gracePeriod=30 Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.567850 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="ceilometer-notification-agent" containerID="cri-o://da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141" gracePeriod=30 Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.567896 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="sg-core" containerID="cri-o://4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7" gracePeriod=30 Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.661351 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae259426-d08e-4d8f-b3e7-f06847f1c2da-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ae259426-d08e-4d8f-b3e7-f06847f1c2da\") " pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.661800 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twm9l\" (UniqueName: \"kubernetes.io/projected/ae259426-d08e-4d8f-b3e7-f06847f1c2da-kube-api-access-twm9l\") pod \"kube-state-metrics-0\" (UID: \"ae259426-d08e-4d8f-b3e7-f06847f1c2da\") " pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.661845 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ae259426-d08e-4d8f-b3e7-f06847f1c2da-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ae259426-d08e-4d8f-b3e7-f06847f1c2da\") " pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.662205 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae259426-d08e-4d8f-b3e7-f06847f1c2da-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ae259426-d08e-4d8f-b3e7-f06847f1c2da\") " pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.667197 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae259426-d08e-4d8f-b3e7-f06847f1c2da-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ae259426-d08e-4d8f-b3e7-f06847f1c2da\") " pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.679232 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/ae259426-d08e-4d8f-b3e7-f06847f1c2da-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ae259426-d08e-4d8f-b3e7-f06847f1c2da\") " pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.680862 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae259426-d08e-4d8f-b3e7-f06847f1c2da-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ae259426-d08e-4d8f-b3e7-f06847f1c2da\") " pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.685746 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twm9l\" (UniqueName: \"kubernetes.io/projected/ae259426-d08e-4d8f-b3e7-f06847f1c2da-kube-api-access-twm9l\") pod \"kube-state-metrics-0\" (UID: \"ae259426-d08e-4d8f-b3e7-f06847f1c2da\") " pod="openstack/kube-state-metrics-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.704180 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.704221 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 20:46:14 crc kubenswrapper[4983]: I1125 20:46:14.881670 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 20:46:15 crc kubenswrapper[4983]: W1125 20:46:15.377035 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae259426_d08e_4d8f_b3e7_f06847f1c2da.slice/crio-cc5b9e561a0e058469b8d0bec6c1029f90509c829a75df395ccda27e3a3bcc61 WatchSource:0}: Error finding container cc5b9e561a0e058469b8d0bec6c1029f90509c829a75df395ccda27e3a3bcc61: Status 404 returned error can't find the container with id cc5b9e561a0e058469b8d0bec6c1029f90509c829a75df395ccda27e3a3bcc61 Nov 25 20:46:15 crc kubenswrapper[4983]: I1125 20:46:15.377128 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 20:46:15 crc kubenswrapper[4983]: I1125 20:46:15.459872 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ae259426-d08e-4d8f-b3e7-f06847f1c2da","Type":"ContainerStarted","Data":"cc5b9e561a0e058469b8d0bec6c1029f90509c829a75df395ccda27e3a3bcc61"} Nov 25 20:46:15 crc kubenswrapper[4983]: I1125 20:46:15.463202 4983 generic.go:334] "Generic (PLEG): container finished" podID="0a71be43-125f-433d-8c68-9632f83b55f0" containerID="a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae" exitCode=0 Nov 25 20:46:15 crc kubenswrapper[4983]: I1125 20:46:15.463233 4983 generic.go:334] "Generic (PLEG): container finished" podID="0a71be43-125f-433d-8c68-9632f83b55f0" containerID="4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7" exitCode=2 Nov 25 20:46:15 crc kubenswrapper[4983]: I1125 20:46:15.463241 4983 generic.go:334] "Generic (PLEG): container finished" podID="0a71be43-125f-433d-8c68-9632f83b55f0" containerID="acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3" exitCode=0 Nov 25 20:46:15 crc kubenswrapper[4983]: I1125 20:46:15.463281 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0a71be43-125f-433d-8c68-9632f83b55f0","Type":"ContainerDied","Data":"a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae"} Nov 25 20:46:15 crc kubenswrapper[4983]: I1125 20:46:15.463313 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a71be43-125f-433d-8c68-9632f83b55f0","Type":"ContainerDied","Data":"4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7"} Nov 25 20:46:15 crc kubenswrapper[4983]: I1125 20:46:15.463324 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a71be43-125f-433d-8c68-9632f83b55f0","Type":"ContainerDied","Data":"acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3"} Nov 25 20:46:15 crc kubenswrapper[4983]: I1125 20:46:15.616489 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9ce7970-111c-43db-81e7-1ee52d40718b" path="/var/lib/kubelet/pods/e9ce7970-111c-43db-81e7-1ee52d40718b/volumes" Nov 25 20:46:15 crc kubenswrapper[4983]: I1125 20:46:15.786878 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 20:46:15 crc kubenswrapper[4983]: I1125 20:46:15.787000 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.135647 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.190812 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-config-data\") pod \"0a71be43-125f-433d-8c68-9632f83b55f0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.191067 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdlbp\" (UniqueName: \"kubernetes.io/projected/0a71be43-125f-433d-8c68-9632f83b55f0-kube-api-access-hdlbp\") pod \"0a71be43-125f-433d-8c68-9632f83b55f0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.191128 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a71be43-125f-433d-8c68-9632f83b55f0-run-httpd\") pod \"0a71be43-125f-433d-8c68-9632f83b55f0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.191187 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-sg-core-conf-yaml\") pod \"0a71be43-125f-433d-8c68-9632f83b55f0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.191222 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-combined-ca-bundle\") pod \"0a71be43-125f-433d-8c68-9632f83b55f0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.191244 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-scripts\") pod \"0a71be43-125f-433d-8c68-9632f83b55f0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.191351 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a71be43-125f-433d-8c68-9632f83b55f0-log-httpd\") pod \"0a71be43-125f-433d-8c68-9632f83b55f0\" (UID: \"0a71be43-125f-433d-8c68-9632f83b55f0\") " Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.192133 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a71be43-125f-433d-8c68-9632f83b55f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0a71be43-125f-433d-8c68-9632f83b55f0" (UID: "0a71be43-125f-433d-8c68-9632f83b55f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.192479 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a71be43-125f-433d-8c68-9632f83b55f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0a71be43-125f-433d-8c68-9632f83b55f0" (UID: "0a71be43-125f-433d-8c68-9632f83b55f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.204122 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-scripts" (OuterVolumeSpecName: "scripts") pod "0a71be43-125f-433d-8c68-9632f83b55f0" (UID: "0a71be43-125f-433d-8c68-9632f83b55f0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.205416 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a71be43-125f-433d-8c68-9632f83b55f0-kube-api-access-hdlbp" (OuterVolumeSpecName: "kube-api-access-hdlbp") pod "0a71be43-125f-433d-8c68-9632f83b55f0" (UID: "0a71be43-125f-433d-8c68-9632f83b55f0"). InnerVolumeSpecName "kube-api-access-hdlbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.227587 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0a71be43-125f-433d-8c68-9632f83b55f0" (UID: "0a71be43-125f-433d-8c68-9632f83b55f0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.284981 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a71be43-125f-433d-8c68-9632f83b55f0" (UID: "0a71be43-125f-433d-8c68-9632f83b55f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.294198 4983 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a71be43-125f-433d-8c68-9632f83b55f0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.294230 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdlbp\" (UniqueName: \"kubernetes.io/projected/0a71be43-125f-433d-8c68-9632f83b55f0-kube-api-access-hdlbp\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.294243 4983 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a71be43-125f-433d-8c68-9632f83b55f0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.294252 4983 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.294260 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.294268 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.329902 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-config-data" (OuterVolumeSpecName: "config-data") pod "0a71be43-125f-433d-8c68-9632f83b55f0" (UID: "0a71be43-125f-433d-8c68-9632f83b55f0"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.395899 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a71be43-125f-433d-8c68-9632f83b55f0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.475677 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ae259426-d08e-4d8f-b3e7-f06847f1c2da","Type":"ContainerStarted","Data":"b3988a1ec56ae493d08f23b66f3a2feb37029e37c15d737df8d6277f5f09804d"} Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.475849 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.484429 4983 generic.go:334] "Generic (PLEG): container finished" podID="0a71be43-125f-433d-8c68-9632f83b55f0" containerID="da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141" exitCode=0 Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.484584 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.484548 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a71be43-125f-433d-8c68-9632f83b55f0","Type":"ContainerDied","Data":"da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141"} Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.484918 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a71be43-125f-433d-8c68-9632f83b55f0","Type":"ContainerDied","Data":"55e84bba8fbbe83190b56a76a78b6cb4223a5b80d7f0e99e0581c3dc188e5cdf"} Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.484973 4983 scope.go:117] "RemoveContainer" containerID="a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.512271 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.149910676 podStartE2EDuration="2.512249649s" podCreationTimestamp="2025-11-25 20:46:14 +0000 UTC" firstStartedPulling="2025-11-25 20:46:15.380432002 +0000 UTC m=+1156.492965414" lastFinishedPulling="2025-11-25 20:46:15.742771005 +0000 UTC m=+1156.855304387" observedRunningTime="2025-11-25 20:46:16.502146092 +0000 UTC m=+1157.614679514" watchObservedRunningTime="2025-11-25 20:46:16.512249649 +0000 UTC m=+1157.624783051" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.528682 4983 scope.go:117] "RemoveContainer" containerID="4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.534035 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.543866 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.562695 4983 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:16 crc kubenswrapper[4983]: E1125 20:46:16.564274 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="proxy-httpd" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.564291 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="proxy-httpd" Nov 25 20:46:16 crc kubenswrapper[4983]: E1125 20:46:16.564334 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="ceilometer-notification-agent" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.564350 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="ceilometer-notification-agent" Nov 25 20:46:16 crc kubenswrapper[4983]: E1125 20:46:16.564363 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="ceilometer-central-agent" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.564369 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="ceilometer-central-agent" Nov 25 20:46:16 crc kubenswrapper[4983]: E1125 20:46:16.564389 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="sg-core" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.564395 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="sg-core" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.564611 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="proxy-httpd" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.564626 4983 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="ceilometer-notification-agent" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.564648 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="sg-core" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.564660 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" containerName="ceilometer-central-agent" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.567385 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.574073 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.574120 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.574288 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.585722 4983 scope.go:117] "RemoveContainer" containerID="da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.589161 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.610749 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/485909ac-581c-4569-9200-a5f22e7e417c-log-httpd\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.610816 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.611011 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-scripts\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.611086 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/485909ac-581c-4569-9200-a5f22e7e417c-run-httpd\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.611168 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-config-data\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.611190 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.611213 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.611236 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcw6p\" (UniqueName: \"kubernetes.io/projected/485909ac-581c-4569-9200-a5f22e7e417c-kube-api-access-wcw6p\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.624737 4983 scope.go:117] "RemoveContainer" containerID="acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.650539 4983 scope.go:117] "RemoveContainer" containerID="a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae" Nov 25 20:46:16 crc kubenswrapper[4983]: E1125 20:46:16.651210 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae\": container with ID starting with a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae not found: ID does not exist" containerID="a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.651256 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae"} err="failed to get container status \"a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae\": rpc error: code = NotFound desc = could not find container \"a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae\": container with ID starting with a8e0a02d421254675d0d65e031eace821b0799548ab9548d9e0ef9f655c040ae not found: ID does not exist" Nov 25 
20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.651283 4983 scope.go:117] "RemoveContainer" containerID="4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7" Nov 25 20:46:16 crc kubenswrapper[4983]: E1125 20:46:16.651692 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7\": container with ID starting with 4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7 not found: ID does not exist" containerID="4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.651719 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7"} err="failed to get container status \"4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7\": rpc error: code = NotFound desc = could not find container \"4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7\": container with ID starting with 4027b77c0c1c39dcb5045d74d13a5b9f0cc19665c136a25c3f9cde8c8cc013c7 not found: ID does not exist" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.651737 4983 scope.go:117] "RemoveContainer" containerID="da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141" Nov 25 20:46:16 crc kubenswrapper[4983]: E1125 20:46:16.652185 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141\": container with ID starting with da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141 not found: ID does not exist" containerID="da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.652213 4983 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141"} err="failed to get container status \"da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141\": rpc error: code = NotFound desc = could not find container \"da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141\": container with ID starting with da48e32d3404a06afa5c8dea89c31c1c6cc195aa8c31a6c1deda8b69f2353141 not found: ID does not exist" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.652227 4983 scope.go:117] "RemoveContainer" containerID="acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3" Nov 25 20:46:16 crc kubenswrapper[4983]: E1125 20:46:16.652664 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3\": container with ID starting with acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3 not found: ID does not exist" containerID="acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.652872 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3"} err="failed to get container status \"acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3\": rpc error: code = NotFound desc = could not find container \"acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3\": container with ID starting with acc6c47943c49acf79c080b0a1613aac536bdf8e03c199930b03355e987b17c3 not found: ID does not exist" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.713468 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/485909ac-581c-4569-9200-a5f22e7e417c-log-httpd\") pod \"ceilometer-0\" (UID: 
\"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.713524 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.713675 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-scripts\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.713724 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/485909ac-581c-4569-9200-a5f22e7e417c-run-httpd\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.713771 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-config-data\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.713840 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.713865 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.713900 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcw6p\" (UniqueName: \"kubernetes.io/projected/485909ac-581c-4569-9200-a5f22e7e417c-kube-api-access-wcw6p\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.714698 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/485909ac-581c-4569-9200-a5f22e7e417c-run-httpd\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.715009 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/485909ac-581c-4569-9200-a5f22e7e417c-log-httpd\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.722838 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-scripts\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.727768 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: 
I1125 20:46:16.729189 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.732055 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-config-data\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.735808 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.737054 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcw6p\" (UniqueName: \"kubernetes.io/projected/485909ac-581c-4569-9200-a5f22e7e417c-kube-api-access-wcw6p\") pod \"ceilometer-0\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " pod="openstack/ceilometer-0" Nov 25 20:46:16 crc kubenswrapper[4983]: I1125 20:46:16.896492 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:46:17 crc kubenswrapper[4983]: I1125 20:46:17.434945 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:17 crc kubenswrapper[4983]: W1125 20:46:17.440655 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod485909ac_581c_4569_9200_a5f22e7e417c.slice/crio-0ab04561172c4b5ccd7425affda7e33ee835e2a269a4181f83ff6a76aed0b2a6 WatchSource:0}: Error finding container 0ab04561172c4b5ccd7425affda7e33ee835e2a269a4181f83ff6a76aed0b2a6: Status 404 returned error can't find the container with id 0ab04561172c4b5ccd7425affda7e33ee835e2a269a4181f83ff6a76aed0b2a6 Nov 25 20:46:17 crc kubenswrapper[4983]: I1125 20:46:17.504611 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"485909ac-581c-4569-9200-a5f22e7e417c","Type":"ContainerStarted","Data":"0ab04561172c4b5ccd7425affda7e33ee835e2a269a4181f83ff6a76aed0b2a6"} Nov 25 20:46:17 crc kubenswrapper[4983]: I1125 20:46:17.627280 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a71be43-125f-433d-8c68-9632f83b55f0" path="/var/lib/kubelet/pods/0a71be43-125f-433d-8c68-9632f83b55f0/volumes" Nov 25 20:46:18 crc kubenswrapper[4983]: I1125 20:46:18.516823 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"485909ac-581c-4569-9200-a5f22e7e417c","Type":"ContainerStarted","Data":"9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2"} Nov 25 20:46:19 crc kubenswrapper[4983]: I1125 20:46:19.551356 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"485909ac-581c-4569-9200-a5f22e7e417c","Type":"ContainerStarted","Data":"1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790"} Nov 25 20:46:19 crc kubenswrapper[4983]: I1125 20:46:19.939804 4983 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 20:46:19 crc kubenswrapper[4983]: I1125 20:46:19.941779 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 20:46:19 crc kubenswrapper[4983]: I1125 20:46:19.946818 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 20:46:20 crc kubenswrapper[4983]: I1125 20:46:20.566903 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"485909ac-581c-4569-9200-a5f22e7e417c","Type":"ContainerStarted","Data":"9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e"} Nov 25 20:46:20 crc kubenswrapper[4983]: I1125 20:46:20.577117 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 20:46:21 crc kubenswrapper[4983]: I1125 20:46:21.581574 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"485909ac-581c-4569-9200-a5f22e7e417c","Type":"ContainerStarted","Data":"ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14"} Nov 25 20:46:21 crc kubenswrapper[4983]: I1125 20:46:21.583458 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 20:46:21 crc kubenswrapper[4983]: I1125 20:46:21.648479 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.930928824 podStartE2EDuration="5.648458495s" podCreationTimestamp="2025-11-25 20:46:16 +0000 UTC" firstStartedPulling="2025-11-25 20:46:17.444851673 +0000 UTC m=+1158.557385065" lastFinishedPulling="2025-11-25 20:46:21.162381344 +0000 UTC m=+1162.274914736" observedRunningTime="2025-11-25 20:46:21.63620784 +0000 UTC m=+1162.748741252" watchObservedRunningTime="2025-11-25 20:46:21.648458495 +0000 UTC m=+1162.760991887" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 
20:46:22.573524 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.599403 4983 generic.go:334] "Generic (PLEG): container finished" podID="3173a912-98b8-4681-88a3-3903ad98a52d" containerID="2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4" exitCode=137 Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.600719 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.601238 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3173a912-98b8-4681-88a3-3903ad98a52d","Type":"ContainerDied","Data":"2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4"} Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.601275 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3173a912-98b8-4681-88a3-3903ad98a52d","Type":"ContainerDied","Data":"a233626f20f423bbf7abf5d6e3b7888c5b74e6dc3ee6bb7b3bcafe0b9c999ec0"} Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.601297 4983 scope.go:117] "RemoveContainer" containerID="2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.639429 4983 scope.go:117] "RemoveContainer" containerID="2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4" Nov 25 20:46:22 crc kubenswrapper[4983]: E1125 20:46:22.640050 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4\": container with ID starting with 2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4 not found: ID does not exist" 
containerID="2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.640106 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4"} err="failed to get container status \"2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4\": rpc error: code = NotFound desc = could not find container \"2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4\": container with ID starting with 2a7b0c5b609b3e2de54d9905f7c5853bf8de7aacf6c15d1fbd71c5d80aa266d4 not found: ID does not exist" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.664866 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3173a912-98b8-4681-88a3-3903ad98a52d-combined-ca-bundle\") pod \"3173a912-98b8-4681-88a3-3903ad98a52d\" (UID: \"3173a912-98b8-4681-88a3-3903ad98a52d\") " Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.694338 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3173a912-98b8-4681-88a3-3903ad98a52d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3173a912-98b8-4681-88a3-3903ad98a52d" (UID: "3173a912-98b8-4681-88a3-3903ad98a52d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.766533 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7f69\" (UniqueName: \"kubernetes.io/projected/3173a912-98b8-4681-88a3-3903ad98a52d-kube-api-access-p7f69\") pod \"3173a912-98b8-4681-88a3-3903ad98a52d\" (UID: \"3173a912-98b8-4681-88a3-3903ad98a52d\") " Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.766778 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3173a912-98b8-4681-88a3-3903ad98a52d-config-data\") pod \"3173a912-98b8-4681-88a3-3903ad98a52d\" (UID: \"3173a912-98b8-4681-88a3-3903ad98a52d\") " Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.767073 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3173a912-98b8-4681-88a3-3903ad98a52d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.770434 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3173a912-98b8-4681-88a3-3903ad98a52d-kube-api-access-p7f69" (OuterVolumeSpecName: "kube-api-access-p7f69") pod "3173a912-98b8-4681-88a3-3903ad98a52d" (UID: "3173a912-98b8-4681-88a3-3903ad98a52d"). InnerVolumeSpecName "kube-api-access-p7f69". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.804498 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3173a912-98b8-4681-88a3-3903ad98a52d-config-data" (OuterVolumeSpecName: "config-data") pod "3173a912-98b8-4681-88a3-3903ad98a52d" (UID: "3173a912-98b8-4681-88a3-3903ad98a52d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.868582 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3173a912-98b8-4681-88a3-3903ad98a52d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.868620 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7f69\" (UniqueName: \"kubernetes.io/projected/3173a912-98b8-4681-88a3-3903ad98a52d-kube-api-access-p7f69\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.935204 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.943027 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.967721 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 20:46:22 crc kubenswrapper[4983]: E1125 20:46:22.968171 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3173a912-98b8-4681-88a3-3903ad98a52d" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.968190 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="3173a912-98b8-4681-88a3-3903ad98a52d" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.968398 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="3173a912-98b8-4681-88a3-3903ad98a52d" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.969045 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.970255 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/968ee4da-4360-486b-a70a-a805a19a6b42-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.970296 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968ee4da-4360-486b-a70a-a805a19a6b42-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.970376 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968ee4da-4360-486b-a70a-a805a19a6b42-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.970411 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/968ee4da-4360-486b-a70a-a805a19a6b42-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.970444 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-665vb\" (UniqueName: \"kubernetes.io/projected/968ee4da-4360-486b-a70a-a805a19a6b42-kube-api-access-665vb\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.972989 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.974851 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.975063 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 20:46:22 crc kubenswrapper[4983]: I1125 20:46:22.979457 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 20:46:23.073047 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/968ee4da-4360-486b-a70a-a805a19a6b42-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 20:46:23.073320 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968ee4da-4360-486b-a70a-a805a19a6b42-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 20:46:23.073526 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968ee4da-4360-486b-a70a-a805a19a6b42-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 
20:46:23.073690 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/968ee4da-4360-486b-a70a-a805a19a6b42-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 20:46:23.073902 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-665vb\" (UniqueName: \"kubernetes.io/projected/968ee4da-4360-486b-a70a-a805a19a6b42-kube-api-access-665vb\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 20:46:23.079404 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/968ee4da-4360-486b-a70a-a805a19a6b42-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 20:46:23.080184 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/968ee4da-4360-486b-a70a-a805a19a6b42-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 20:46:23.081013 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/968ee4da-4360-486b-a70a-a805a19a6b42-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 20:46:23.083440 4983 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/968ee4da-4360-486b-a70a-a805a19a6b42-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 20:46:23.098271 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-665vb\" (UniqueName: \"kubernetes.io/projected/968ee4da-4360-486b-a70a-a805a19a6b42-kube-api-access-665vb\") pod \"nova-cell1-novncproxy-0\" (UID: \"968ee4da-4360-486b-a70a-a805a19a6b42\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 20:46:23.294489 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 20:46:23.616645 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3173a912-98b8-4681-88a3-3903ad98a52d" path="/var/lib/kubelet/pods/3173a912-98b8-4681-88a3-3903ad98a52d/volumes" Nov 25 20:46:23 crc kubenswrapper[4983]: I1125 20:46:23.798223 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 20:46:23 crc kubenswrapper[4983]: W1125 20:46:23.806280 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod968ee4da_4360_486b_a70a_a805a19a6b42.slice/crio-12037c63e35b5572f6fb84843b25962b480f6344bc20c4f95af2cd166f069d58 WatchSource:0}: Error finding container 12037c63e35b5572f6fb84843b25962b480f6344bc20c4f95af2cd166f069d58: Status 404 returned error can't find the container with id 12037c63e35b5572f6fb84843b25962b480f6344bc20c4f95af2cd166f069d58 Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.623713 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"968ee4da-4360-486b-a70a-a805a19a6b42","Type":"ContainerStarted","Data":"5f6ee23705b1abe9771551ee8b7ab7402b50de96f7b1f4620804e96e8a04fadc"} Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.624310 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"968ee4da-4360-486b-a70a-a805a19a6b42","Type":"ContainerStarted","Data":"12037c63e35b5572f6fb84843b25962b480f6344bc20c4f95af2cd166f069d58"} Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.650523 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.650493141 podStartE2EDuration="2.650493141s" podCreationTimestamp="2025-11-25 20:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:46:24.645878289 +0000 UTC m=+1165.758411691" watchObservedRunningTime="2025-11-25 20:46:24.650493141 +0000 UTC m=+1165.763026543" Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.708833 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.708986 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.709484 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.709850 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.712913 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.713273 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.944672 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-qntk5"] Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.946326 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.965990 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-qntk5"] Nov 25 20:46:24 crc kubenswrapper[4983]: I1125 20:46:24.969432 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.021961 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qckp\" (UniqueName: \"kubernetes.io/projected/22e37849-509f-4fac-98a9-ab22a28c8c28-kube-api-access-9qckp\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.022104 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.022134 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.022172 
4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.022237 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.022254 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-config\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.127379 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qckp\" (UniqueName: \"kubernetes.io/projected/22e37849-509f-4fac-98a9-ab22a28c8c28-kube-api-access-9qckp\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.127488 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.127521 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.127592 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.127652 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.127676 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-config\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.128733 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.128750 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.129015 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.129485 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.129486 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-config\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.185425 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qckp\" (UniqueName: \"kubernetes.io/projected/22e37849-509f-4fac-98a9-ab22a28c8c28-kube-api-access-9qckp\") pod \"dnsmasq-dns-89c5cd4d5-qntk5\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.270531 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:25 crc kubenswrapper[4983]: I1125 20:46:25.822168 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-qntk5"] Nov 25 20:46:26 crc kubenswrapper[4983]: I1125 20:46:26.647764 4983 generic.go:334] "Generic (PLEG): container finished" podID="22e37849-509f-4fac-98a9-ab22a28c8c28" containerID="222fb86698ce196e2fab5b14ed821288a57111881c75e6cd182e8e8cece44b16" exitCode=0 Nov 25 20:46:26 crc kubenswrapper[4983]: I1125 20:46:26.647956 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" event={"ID":"22e37849-509f-4fac-98a9-ab22a28c8c28","Type":"ContainerDied","Data":"222fb86698ce196e2fab5b14ed821288a57111881c75e6cd182e8e8cece44b16"} Nov 25 20:46:26 crc kubenswrapper[4983]: I1125 20:46:26.648462 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" event={"ID":"22e37849-509f-4fac-98a9-ab22a28c8c28","Type":"ContainerStarted","Data":"b95b23ede550795d437b4cdfe7a862810f2d9143b65924d90ebc17b25dea309a"} Nov 25 20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.298537 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.299439 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="ceilometer-central-agent" containerID="cri-o://9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2" gracePeriod=30 Nov 25 20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.299573 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="proxy-httpd" containerID="cri-o://ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14" gracePeriod=30 Nov 25 20:46:27 crc 
kubenswrapper[4983]: I1125 20:46:27.299569 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="sg-core" containerID="cri-o://9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e" gracePeriod=30 Nov 25 20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.299815 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="ceilometer-notification-agent" containerID="cri-o://1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790" gracePeriod=30 Nov 25 20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.638459 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.662857 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" event={"ID":"22e37849-509f-4fac-98a9-ab22a28c8c28","Type":"ContainerStarted","Data":"71612eefefe28389b5cb7ada9af1855aea8b04808f0f11856bc1b30fd0ba3fc4"} Nov 25 20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.666674 4983 generic.go:334] "Generic (PLEG): container finished" podID="485909ac-581c-4569-9200-a5f22e7e417c" containerID="ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14" exitCode=0 Nov 25 20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.666742 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"485909ac-581c-4569-9200-a5f22e7e417c","Type":"ContainerDied","Data":"ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14"} Nov 25 20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.666849 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"485909ac-581c-4569-9200-a5f22e7e417c","Type":"ContainerDied","Data":"9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e"} Nov 25 
20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.666779 4983 generic.go:334] "Generic (PLEG): container finished" podID="485909ac-581c-4569-9200-a5f22e7e417c" containerID="9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e" exitCode=2 Nov 25 20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.667242 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" containerName="nova-api-log" containerID="cri-o://782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491" gracePeriod=30 Nov 25 20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.667259 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" containerName="nova-api-api" containerID="cri-o://2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029" gracePeriod=30 Nov 25 20:46:27 crc kubenswrapper[4983]: I1125 20:46:27.700232 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" podStartSLOduration=3.700204076 podStartE2EDuration="3.700204076s" podCreationTimestamp="2025-11-25 20:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:46:27.692678047 +0000 UTC m=+1168.805211509" watchObservedRunningTime="2025-11-25 20:46:27.700204076 +0000 UTC m=+1168.812737468" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.294770 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.297778 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.428889 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/485909ac-581c-4569-9200-a5f22e7e417c-run-httpd\") pod \"485909ac-581c-4569-9200-a5f22e7e417c\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.429025 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-scripts\") pod \"485909ac-581c-4569-9200-a5f22e7e417c\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.429143 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/485909ac-581c-4569-9200-a5f22e7e417c-log-httpd\") pod \"485909ac-581c-4569-9200-a5f22e7e417c\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.429235 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-ceilometer-tls-certs\") pod \"485909ac-581c-4569-9200-a5f22e7e417c\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.429392 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-config-data\") pod \"485909ac-581c-4569-9200-a5f22e7e417c\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.429492 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-sg-core-conf-yaml\") pod \"485909ac-581c-4569-9200-a5f22e7e417c\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.429539 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcw6p\" (UniqueName: \"kubernetes.io/projected/485909ac-581c-4569-9200-a5f22e7e417c-kube-api-access-wcw6p\") pod \"485909ac-581c-4569-9200-a5f22e7e417c\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.429607 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-combined-ca-bundle\") pod \"485909ac-581c-4569-9200-a5f22e7e417c\" (UID: \"485909ac-581c-4569-9200-a5f22e7e417c\") " Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.429873 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485909ac-581c-4569-9200-a5f22e7e417c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "485909ac-581c-4569-9200-a5f22e7e417c" (UID: "485909ac-581c-4569-9200-a5f22e7e417c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.430520 4983 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/485909ac-581c-4569-9200-a5f22e7e417c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.430586 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485909ac-581c-4569-9200-a5f22e7e417c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "485909ac-581c-4569-9200-a5f22e7e417c" (UID: "485909ac-581c-4569-9200-a5f22e7e417c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.438284 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-scripts" (OuterVolumeSpecName: "scripts") pod "485909ac-581c-4569-9200-a5f22e7e417c" (UID: "485909ac-581c-4569-9200-a5f22e7e417c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.442210 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485909ac-581c-4569-9200-a5f22e7e417c-kube-api-access-wcw6p" (OuterVolumeSpecName: "kube-api-access-wcw6p") pod "485909ac-581c-4569-9200-a5f22e7e417c" (UID: "485909ac-581c-4569-9200-a5f22e7e417c"). InnerVolumeSpecName "kube-api-access-wcw6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.470205 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "485909ac-581c-4569-9200-a5f22e7e417c" (UID: "485909ac-581c-4569-9200-a5f22e7e417c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.502586 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "485909ac-581c-4569-9200-a5f22e7e417c" (UID: "485909ac-581c-4569-9200-a5f22e7e417c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.533221 4983 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.535455 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcw6p\" (UniqueName: \"kubernetes.io/projected/485909ac-581c-4569-9200-a5f22e7e417c-kube-api-access-wcw6p\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.535479 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.535492 4983 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/485909ac-581c-4569-9200-a5f22e7e417c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.535503 4983 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.543824 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "485909ac-581c-4569-9200-a5f22e7e417c" (UID: "485909ac-581c-4569-9200-a5f22e7e417c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.588497 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-config-data" (OuterVolumeSpecName: "config-data") pod "485909ac-581c-4569-9200-a5f22e7e417c" (UID: "485909ac-581c-4569-9200-a5f22e7e417c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.639116 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.639162 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485909ac-581c-4569-9200-a5f22e7e417c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.681762 4983 generic.go:334] "Generic (PLEG): container finished" podID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" containerID="782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491" exitCode=143 Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.681838 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8a55fb6-5054-4e18-8125-e7e4f7e2009e","Type":"ContainerDied","Data":"782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491"} Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.690093 4983 generic.go:334] "Generic (PLEG): container finished" podID="485909ac-581c-4569-9200-a5f22e7e417c" containerID="1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790" exitCode=0 Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.690137 4983 generic.go:334] "Generic (PLEG): container finished" podID="485909ac-581c-4569-9200-a5f22e7e417c" 
containerID="9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2" exitCode=0 Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.691167 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.691736 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"485909ac-581c-4569-9200-a5f22e7e417c","Type":"ContainerDied","Data":"1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790"} Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.691844 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"485909ac-581c-4569-9200-a5f22e7e417c","Type":"ContainerDied","Data":"9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2"} Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.691873 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.691899 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"485909ac-581c-4569-9200-a5f22e7e417c","Type":"ContainerDied","Data":"0ab04561172c4b5ccd7425affda7e33ee835e2a269a4181f83ff6a76aed0b2a6"} Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.691926 4983 scope.go:117] "RemoveContainer" containerID="ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.721944 4983 scope.go:117] "RemoveContainer" containerID="9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.742290 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.758984 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:28 
crc kubenswrapper[4983]: I1125 20:46:28.774300 4983 scope.go:117] "RemoveContainer" containerID="1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.811239 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:28 crc kubenswrapper[4983]: E1125 20:46:28.812391 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="proxy-httpd" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.812415 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="proxy-httpd" Nov 25 20:46:28 crc kubenswrapper[4983]: E1125 20:46:28.812447 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="ceilometer-notification-agent" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.812454 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="ceilometer-notification-agent" Nov 25 20:46:28 crc kubenswrapper[4983]: E1125 20:46:28.812502 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="sg-core" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.812510 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="sg-core" Nov 25 20:46:28 crc kubenswrapper[4983]: E1125 20:46:28.812534 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="ceilometer-central-agent" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.812540 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="ceilometer-central-agent" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.813010 4983 
memory_manager.go:354] "RemoveStaleState removing state" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="ceilometer-notification-agent" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.813049 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="ceilometer-central-agent" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.813087 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="proxy-httpd" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.813105 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="485909ac-581c-4569-9200-a5f22e7e417c" containerName="sg-core" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.826081 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.828576 4983 scope.go:117] "RemoveContainer" containerID="9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.830100 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.835839 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.836050 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.843578 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.847423 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.847510 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6388d4b7-e90a-42ae-9aa3-a537bccca436-run-httpd\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.847531 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6388d4b7-e90a-42ae-9aa3-a537bccca436-log-httpd\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.847752 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.847844 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-config-data\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.847882 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svgvr\" (UniqueName: \"kubernetes.io/projected/6388d4b7-e90a-42ae-9aa3-a537bccca436-kube-api-access-svgvr\") pod \"ceilometer-0\" (UID: 
\"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.848903 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-scripts\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.848967 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.878787 4983 scope.go:117] "RemoveContainer" containerID="ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14" Nov 25 20:46:28 crc kubenswrapper[4983]: E1125 20:46:28.880012 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14\": container with ID starting with ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14 not found: ID does not exist" containerID="ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.880055 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14"} err="failed to get container status \"ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14\": rpc error: code = NotFound desc = could not find container \"ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14\": container with ID starting with 
ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14 not found: ID does not exist" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.880088 4983 scope.go:117] "RemoveContainer" containerID="9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e" Nov 25 20:46:28 crc kubenswrapper[4983]: E1125 20:46:28.881377 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e\": container with ID starting with 9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e not found: ID does not exist" containerID="9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.881444 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e"} err="failed to get container status \"9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e\": rpc error: code = NotFound desc = could not find container \"9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e\": container with ID starting with 9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e not found: ID does not exist" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.881486 4983 scope.go:117] "RemoveContainer" containerID="1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790" Nov 25 20:46:28 crc kubenswrapper[4983]: E1125 20:46:28.882030 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790\": container with ID starting with 1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790 not found: ID does not exist" containerID="1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790" Nov 25 20:46:28 crc 
kubenswrapper[4983]: I1125 20:46:28.882069 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790"} err="failed to get container status \"1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790\": rpc error: code = NotFound desc = could not find container \"1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790\": container with ID starting with 1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790 not found: ID does not exist" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.882089 4983 scope.go:117] "RemoveContainer" containerID="9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2" Nov 25 20:46:28 crc kubenswrapper[4983]: E1125 20:46:28.883582 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2\": container with ID starting with 9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2 not found: ID does not exist" containerID="9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.883643 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2"} err="failed to get container status \"9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2\": rpc error: code = NotFound desc = could not find container \"9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2\": container with ID starting with 9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2 not found: ID does not exist" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.883681 4983 scope.go:117] "RemoveContainer" containerID="ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14" Nov 25 
20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.884171 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14"} err="failed to get container status \"ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14\": rpc error: code = NotFound desc = could not find container \"ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14\": container with ID starting with ace9b1ea2d4c62764e3207dacf730daf5daa51dd380505e03268e6905a6f0a14 not found: ID does not exist" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.884196 4983 scope.go:117] "RemoveContainer" containerID="9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.884596 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e"} err="failed to get container status \"9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e\": rpc error: code = NotFound desc = could not find container \"9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e\": container with ID starting with 9d7367ec3155569cf81796d078818b1c2eae5aab234423e01ab7490cd42ca85e not found: ID does not exist" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.884643 4983 scope.go:117] "RemoveContainer" containerID="1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.885020 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790"} err="failed to get container status \"1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790\": rpc error: code = NotFound desc = could not find container 
\"1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790\": container with ID starting with 1a5e2c42272ece64f550db3a60f3d89766ee90dd3c88a33928671879cac3c790 not found: ID does not exist" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.885041 4983 scope.go:117] "RemoveContainer" containerID="9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.885314 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2"} err="failed to get container status \"9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2\": rpc error: code = NotFound desc = could not find container \"9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2\": container with ID starting with 9298632dde3a302d257bd77ba07850eeeacd1b09e7cef7be3a1aa736e68c36e2 not found: ID does not exist" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.952376 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-config-data\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.952483 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svgvr\" (UniqueName: \"kubernetes.io/projected/6388d4b7-e90a-42ae-9aa3-a537bccca436-kube-api-access-svgvr\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.952521 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-scripts\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " 
pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.952621 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.952730 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.952860 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6388d4b7-e90a-42ae-9aa3-a537bccca436-run-httpd\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.952891 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6388d4b7-e90a-42ae-9aa3-a537bccca436-log-httpd\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.952975 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.954263 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6388d4b7-e90a-42ae-9aa3-a537bccca436-run-httpd\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.954632 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6388d4b7-e90a-42ae-9aa3-a537bccca436-log-httpd\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.956894 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.961788 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.962607 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-scripts\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.963121 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-config-data\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.965207 4983 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:28 crc kubenswrapper[4983]: E1125 20:46:28.970068 4983 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod485909ac_581c_4569_9200_a5f22e7e417c.slice\": RecentStats: unable to find data in memory cache]" Nov 25 20:46:28 crc kubenswrapper[4983]: I1125 20:46:28.976631 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svgvr\" (UniqueName: \"kubernetes.io/projected/6388d4b7-e90a-42ae-9aa3-a537bccca436-kube-api-access-svgvr\") pod \"ceilometer-0\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " pod="openstack/ceilometer-0" Nov 25 20:46:29 crc kubenswrapper[4983]: I1125 20:46:29.164855 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:46:29 crc kubenswrapper[4983]: I1125 20:46:29.402663 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:29 crc kubenswrapper[4983]: I1125 20:46:29.515302 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:29 crc kubenswrapper[4983]: I1125 20:46:29.628395 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="485909ac-581c-4569-9200-a5f22e7e417c" path="/var/lib/kubelet/pods/485909ac-581c-4569-9200-a5f22e7e417c/volumes" Nov 25 20:46:29 crc kubenswrapper[4983]: I1125 20:46:29.702867 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6388d4b7-e90a-42ae-9aa3-a537bccca436","Type":"ContainerStarted","Data":"85b87bde0e3a36c847be993d80e167930b074ff0cbfce5dec057c86285d11510"} Nov 25 20:46:30 crc kubenswrapper[4983]: I1125 20:46:30.741548 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6388d4b7-e90a-42ae-9aa3-a537bccca436","Type":"ContainerStarted","Data":"2e1156b52ae1b410622b51cac3062f7c9f466822489a6255b86510e9a5b6c408"} Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.289139 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.323345 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-logs\") pod \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.323659 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gbgv\" (UniqueName: \"kubernetes.io/projected/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-kube-api-access-8gbgv\") pod \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.323730 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-combined-ca-bundle\") pod \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.323882 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-config-data\") pod \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\" (UID: \"e8a55fb6-5054-4e18-8125-e7e4f7e2009e\") " Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.324251 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-logs" (OuterVolumeSpecName: "logs") pod "e8a55fb6-5054-4e18-8125-e7e4f7e2009e" (UID: "e8a55fb6-5054-4e18-8125-e7e4f7e2009e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.324835 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.342630 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-kube-api-access-8gbgv" (OuterVolumeSpecName: "kube-api-access-8gbgv") pod "e8a55fb6-5054-4e18-8125-e7e4f7e2009e" (UID: "e8a55fb6-5054-4e18-8125-e7e4f7e2009e"). InnerVolumeSpecName "kube-api-access-8gbgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.372693 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-config-data" (OuterVolumeSpecName: "config-data") pod "e8a55fb6-5054-4e18-8125-e7e4f7e2009e" (UID: "e8a55fb6-5054-4e18-8125-e7e4f7e2009e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.378541 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8a55fb6-5054-4e18-8125-e7e4f7e2009e" (UID: "e8a55fb6-5054-4e18-8125-e7e4f7e2009e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.427202 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gbgv\" (UniqueName: \"kubernetes.io/projected/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-kube-api-access-8gbgv\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.427251 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.427443 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8a55fb6-5054-4e18-8125-e7e4f7e2009e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.759219 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6388d4b7-e90a-42ae-9aa3-a537bccca436","Type":"ContainerStarted","Data":"4084938c90ad7f7d494b1cea55c3a38bc998ed26efe9c1707d782cb81c2692e3"} Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.771969 4983 generic.go:334] "Generic (PLEG): container finished" podID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" containerID="2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029" exitCode=0 Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.772055 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e8a55fb6-5054-4e18-8125-e7e4f7e2009e","Type":"ContainerDied","Data":"2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029"} Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.772104 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e8a55fb6-5054-4e18-8125-e7e4f7e2009e","Type":"ContainerDied","Data":"fdf7a10a3b7e44c71cd27064fb39b2010118a4f2d29caa6eac95e5a0f7d9972f"} Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.772128 4983 scope.go:117] "RemoveContainer" containerID="2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.772439 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.861664 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.865203 4983 scope.go:117] "RemoveContainer" containerID="782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.876525 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.889894 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:31 crc kubenswrapper[4983]: E1125 20:46:31.890587 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" containerName="nova-api-api" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.890605 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" containerName="nova-api-api" Nov 25 20:46:31 crc kubenswrapper[4983]: E1125 20:46:31.890627 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" containerName="nova-api-log" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.890633 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" containerName="nova-api-log" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.890843 4983 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" containerName="nova-api-log" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.890870 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" containerName="nova-api-api" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.892185 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.897889 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.899572 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.899718 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.900019 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.909076 4983 scope.go:117] "RemoveContainer" containerID="2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029" Nov 25 20:46:31 crc kubenswrapper[4983]: E1125 20:46:31.910118 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029\": container with ID starting with 2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029 not found: ID does not exist" containerID="2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.910170 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029"} 
err="failed to get container status \"2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029\": rpc error: code = NotFound desc = could not find container \"2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029\": container with ID starting with 2bb3898d91910260feb7b0185a9acb402ce8bc5cc7fe7c1fce953fa30cde0029 not found: ID does not exist" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.910209 4983 scope.go:117] "RemoveContainer" containerID="782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491" Nov 25 20:46:31 crc kubenswrapper[4983]: E1125 20:46:31.923097 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491\": container with ID starting with 782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491 not found: ID does not exist" containerID="782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.923146 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491"} err="failed to get container status \"782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491\": rpc error: code = NotFound desc = could not find container \"782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491\": container with ID starting with 782a039c14d90bfc12ba118f0c8942c09443aa3c6450bedd9be2ec21df4e0491 not found: ID does not exist" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.942256 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:31 crc 
kubenswrapper[4983]: I1125 20:46:31.942387 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-public-tls-certs\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.942463 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npcsm\" (UniqueName: \"kubernetes.io/projected/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-kube-api-access-npcsm\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.942520 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-logs\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.942749 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-config-data\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:31 crc kubenswrapper[4983]: I1125 20:46:31.942792 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.045572 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-logs\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.045724 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-config-data\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.045754 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.045797 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.045838 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-public-tls-certs\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.045867 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npcsm\" (UniqueName: \"kubernetes.io/projected/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-kube-api-access-npcsm\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 
20:46:32.046282 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-logs\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.054999 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.056346 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-public-tls-certs\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.057048 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-config-data\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.064109 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.064435 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npcsm\" (UniqueName: \"kubernetes.io/projected/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-kube-api-access-npcsm\") pod \"nova-api-0\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " 
pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.232616 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.715380 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.791870 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4f6ce8bd-ed80-467b-bb3f-970b158cad4e","Type":"ContainerStarted","Data":"9fa2be6c92668e5c74b33917145b8c41f1f890c240b0c54b4afd037e7ab6b379"} Nov 25 20:46:32 crc kubenswrapper[4983]: I1125 20:46:32.798423 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6388d4b7-e90a-42ae-9aa3-a537bccca436","Type":"ContainerStarted","Data":"ad79feaaa8b4f74f9a310fc7c8e95feaa9b82fa6cc4e91f4919e0ed0933d7ab8"} Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.294757 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.317370 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.616947 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8a55fb6-5054-4e18-8125-e7e4f7e2009e" path="/var/lib/kubelet/pods/e8a55fb6-5054-4e18-8125-e7e4f7e2009e/volumes" Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.810246 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4f6ce8bd-ed80-467b-bb3f-970b158cad4e","Type":"ContainerStarted","Data":"696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd"} Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.811453 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4f6ce8bd-ed80-467b-bb3f-970b158cad4e","Type":"ContainerStarted","Data":"b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa"} Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.813153 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6388d4b7-e90a-42ae-9aa3-a537bccca436","Type":"ContainerStarted","Data":"2be585ab05231d9a05d3de879bfa426f74f686ab1cc023442314e8a5e8e72d8e"} Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.813569 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="ceilometer-central-agent" containerID="cri-o://2e1156b52ae1b410622b51cac3062f7c9f466822489a6255b86510e9a5b6c408" gracePeriod=30 Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.813758 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="proxy-httpd" containerID="cri-o://2be585ab05231d9a05d3de879bfa426f74f686ab1cc023442314e8a5e8e72d8e" gracePeriod=30 Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.813868 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="sg-core" containerID="cri-o://ad79feaaa8b4f74f9a310fc7c8e95feaa9b82fa6cc4e91f4919e0ed0933d7ab8" gracePeriod=30 Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.813975 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="ceilometer-notification-agent" containerID="cri-o://4084938c90ad7f7d494b1cea55c3a38bc998ed26efe9c1707d782cb81c2692e3" gracePeriod=30 Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.837790 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-cell1-novncproxy-0" Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.863941 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.863920156 podStartE2EDuration="2.863920156s" podCreationTimestamp="2025-11-25 20:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:46:33.846174276 +0000 UTC m=+1174.958707668" watchObservedRunningTime="2025-11-25 20:46:33.863920156 +0000 UTC m=+1174.976453548" Nov 25 20:46:33 crc kubenswrapper[4983]: I1125 20:46:33.902690 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.296006157 podStartE2EDuration="5.902668752s" podCreationTimestamp="2025-11-25 20:46:28 +0000 UTC" firstStartedPulling="2025-11-25 20:46:29.530505518 +0000 UTC m=+1170.643038910" lastFinishedPulling="2025-11-25 20:46:33.137168103 +0000 UTC m=+1174.249701505" observedRunningTime="2025-11-25 20:46:33.900666889 +0000 UTC m=+1175.013200301" watchObservedRunningTime="2025-11-25 20:46:33.902668752 +0000 UTC m=+1175.015202144" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.121175 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-q7h2t"] Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.122596 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.125206 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.168657 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q7h2t"] Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.183122 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.200917 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp92b\" (UniqueName: \"kubernetes.io/projected/a28d6ec9-9763-4034-ada5-549b22bf6607-kube-api-access-bp92b\") pod \"nova-cell1-cell-mapping-q7h2t\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.200968 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-config-data\") pod \"nova-cell1-cell-mapping-q7h2t\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.201001 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-scripts\") pod \"nova-cell1-cell-mapping-q7h2t\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.201033 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q7h2t\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.302901 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp92b\" (UniqueName: \"kubernetes.io/projected/a28d6ec9-9763-4034-ada5-549b22bf6607-kube-api-access-bp92b\") pod \"nova-cell1-cell-mapping-q7h2t\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.303166 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-config-data\") pod \"nova-cell1-cell-mapping-q7h2t\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.303278 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-scripts\") pod \"nova-cell1-cell-mapping-q7h2t\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.303357 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q7h2t\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.310973 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q7h2t\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.311430 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-scripts\") pod \"nova-cell1-cell-mapping-q7h2t\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.315204 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-config-data\") pod \"nova-cell1-cell-mapping-q7h2t\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.323005 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp92b\" (UniqueName: \"kubernetes.io/projected/a28d6ec9-9763-4034-ada5-549b22bf6607-kube-api-access-bp92b\") pod \"nova-cell1-cell-mapping-q7h2t\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.506702 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.826499 4983 generic.go:334] "Generic (PLEG): container finished" podID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerID="2be585ab05231d9a05d3de879bfa426f74f686ab1cc023442314e8a5e8e72d8e" exitCode=0 Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.827021 4983 generic.go:334] "Generic (PLEG): container finished" podID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerID="ad79feaaa8b4f74f9a310fc7c8e95feaa9b82fa6cc4e91f4919e0ed0933d7ab8" exitCode=2 Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.827032 4983 generic.go:334] "Generic (PLEG): container finished" podID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerID="4084938c90ad7f7d494b1cea55c3a38bc998ed26efe9c1707d782cb81c2692e3" exitCode=0 Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.826696 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6388d4b7-e90a-42ae-9aa3-a537bccca436","Type":"ContainerDied","Data":"2be585ab05231d9a05d3de879bfa426f74f686ab1cc023442314e8a5e8e72d8e"} Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.827948 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6388d4b7-e90a-42ae-9aa3-a537bccca436","Type":"ContainerDied","Data":"ad79feaaa8b4f74f9a310fc7c8e95feaa9b82fa6cc4e91f4919e0ed0933d7ab8"} Nov 25 20:46:34 crc kubenswrapper[4983]: I1125 20:46:34.827968 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6388d4b7-e90a-42ae-9aa3-a537bccca436","Type":"ContainerDied","Data":"4084938c90ad7f7d494b1cea55c3a38bc998ed26efe9c1707d782cb81c2692e3"} Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.014317 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q7h2t"] Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.272783 4983 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.360031 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-rqg7j"] Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.360400 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" podUID="27e29df2-0e09-486f-afe1-fb74a909567c" containerName="dnsmasq-dns" containerID="cri-o://3d511550f6f50f21762d0a13d5eab8144b2a675eeaa76f06b8b92c3cecf1f47d" gracePeriod=10 Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.838473 4983 generic.go:334] "Generic (PLEG): container finished" podID="27e29df2-0e09-486f-afe1-fb74a909567c" containerID="3d511550f6f50f21762d0a13d5eab8144b2a675eeaa76f06b8b92c3cecf1f47d" exitCode=0 Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.838534 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" event={"ID":"27e29df2-0e09-486f-afe1-fb74a909567c","Type":"ContainerDied","Data":"3d511550f6f50f21762d0a13d5eab8144b2a675eeaa76f06b8b92c3cecf1f47d"} Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.838845 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" event={"ID":"27e29df2-0e09-486f-afe1-fb74a909567c","Type":"ContainerDied","Data":"aa9657ef8fb07211a70a86ee63f0e28d1fc7d0a80cf0b5af767f237d51d4dfd3"} Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.838866 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa9657ef8fb07211a70a86ee63f0e28d1fc7d0a80cf0b5af767f237d51d4dfd3" Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.841362 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q7h2t" 
event={"ID":"a28d6ec9-9763-4034-ada5-549b22bf6607","Type":"ContainerStarted","Data":"28e4630680fced936f2d100c4632c7a0cc62c5fed9b7d1a72528a45e0ccf7215"} Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.841393 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q7h2t" event={"ID":"a28d6ec9-9763-4034-ada5-549b22bf6607","Type":"ContainerStarted","Data":"19b4360a4afd41d0cc932e7b65ef7929d4bdc86ea6ad1c857ec3837ca267c2ef"} Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.864881 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.869046 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-q7h2t" podStartSLOduration=1.869022704 podStartE2EDuration="1.869022704s" podCreationTimestamp="2025-11-25 20:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:46:35.857338525 +0000 UTC m=+1176.969871917" watchObservedRunningTime="2025-11-25 20:46:35.869022704 +0000 UTC m=+1176.981556096" Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.953274 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-dns-swift-storage-0\") pod \"27e29df2-0e09-486f-afe1-fb74a909567c\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.953318 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-config\") pod \"27e29df2-0e09-486f-afe1-fb74a909567c\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.953383 4983 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-ovsdbserver-sb\") pod \"27e29df2-0e09-486f-afe1-fb74a909567c\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.953423 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clqrz\" (UniqueName: \"kubernetes.io/projected/27e29df2-0e09-486f-afe1-fb74a909567c-kube-api-access-clqrz\") pod \"27e29df2-0e09-486f-afe1-fb74a909567c\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.953508 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-ovsdbserver-nb\") pod \"27e29df2-0e09-486f-afe1-fb74a909567c\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.953624 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-dns-svc\") pod \"27e29df2-0e09-486f-afe1-fb74a909567c\" (UID: \"27e29df2-0e09-486f-afe1-fb74a909567c\") " Nov 25 20:46:35 crc kubenswrapper[4983]: I1125 20:46:35.964843 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e29df2-0e09-486f-afe1-fb74a909567c-kube-api-access-clqrz" (OuterVolumeSpecName: "kube-api-access-clqrz") pod "27e29df2-0e09-486f-afe1-fb74a909567c" (UID: "27e29df2-0e09-486f-afe1-fb74a909567c"). InnerVolumeSpecName "kube-api-access-clqrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.014139 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27e29df2-0e09-486f-afe1-fb74a909567c" (UID: "27e29df2-0e09-486f-afe1-fb74a909567c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.041686 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27e29df2-0e09-486f-afe1-fb74a909567c" (UID: "27e29df2-0e09-486f-afe1-fb74a909567c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.058901 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clqrz\" (UniqueName: \"kubernetes.io/projected/27e29df2-0e09-486f-afe1-fb74a909567c-kube-api-access-clqrz\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.058937 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.058948 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.061085 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-config" (OuterVolumeSpecName: "config") pod 
"27e29df2-0e09-486f-afe1-fb74a909567c" (UID: "27e29df2-0e09-486f-afe1-fb74a909567c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.063678 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "27e29df2-0e09-486f-afe1-fb74a909567c" (UID: "27e29df2-0e09-486f-afe1-fb74a909567c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.066075 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27e29df2-0e09-486f-afe1-fb74a909567c" (UID: "27e29df2-0e09-486f-afe1-fb74a909567c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.160328 4983 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.160357 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.160368 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27e29df2-0e09-486f-afe1-fb74a909567c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.868601 4983 generic.go:334] "Generic (PLEG): container finished" podID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerID="2e1156b52ae1b410622b51cac3062f7c9f466822489a6255b86510e9a5b6c408" exitCode=0 Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.869192 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-rqg7j" Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.870543 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6388d4b7-e90a-42ae-9aa3-a537bccca436","Type":"ContainerDied","Data":"2e1156b52ae1b410622b51cac3062f7c9f466822489a6255b86510e9a5b6c408"} Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.912710 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-rqg7j"] Nov 25 20:46:36 crc kubenswrapper[4983]: I1125 20:46:36.921191 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-rqg7j"] Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.032340 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.079546 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svgvr\" (UniqueName: \"kubernetes.io/projected/6388d4b7-e90a-42ae-9aa3-a537bccca436-kube-api-access-svgvr\") pod \"6388d4b7-e90a-42ae-9aa3-a537bccca436\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.079631 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6388d4b7-e90a-42ae-9aa3-a537bccca436-run-httpd\") pod \"6388d4b7-e90a-42ae-9aa3-a537bccca436\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.079741 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6388d4b7-e90a-42ae-9aa3-a537bccca436-log-httpd\") pod \"6388d4b7-e90a-42ae-9aa3-a537bccca436\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.079799 4983 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-scripts\") pod \"6388d4b7-e90a-42ae-9aa3-a537bccca436\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.079839 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-sg-core-conf-yaml\") pod \"6388d4b7-e90a-42ae-9aa3-a537bccca436\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.079856 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-combined-ca-bundle\") pod \"6388d4b7-e90a-42ae-9aa3-a537bccca436\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.079932 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-config-data\") pod \"6388d4b7-e90a-42ae-9aa3-a537bccca436\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.080008 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-ceilometer-tls-certs\") pod \"6388d4b7-e90a-42ae-9aa3-a537bccca436\" (UID: \"6388d4b7-e90a-42ae-9aa3-a537bccca436\") " Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.081881 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6388d4b7-e90a-42ae-9aa3-a537bccca436-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6388d4b7-e90a-42ae-9aa3-a537bccca436" (UID: 
"6388d4b7-e90a-42ae-9aa3-a537bccca436"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.082167 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6388d4b7-e90a-42ae-9aa3-a537bccca436-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6388d4b7-e90a-42ae-9aa3-a537bccca436" (UID: "6388d4b7-e90a-42ae-9aa3-a537bccca436"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.088911 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-scripts" (OuterVolumeSpecName: "scripts") pod "6388d4b7-e90a-42ae-9aa3-a537bccca436" (UID: "6388d4b7-e90a-42ae-9aa3-a537bccca436"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.102152 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6388d4b7-e90a-42ae-9aa3-a537bccca436-kube-api-access-svgvr" (OuterVolumeSpecName: "kube-api-access-svgvr") pod "6388d4b7-e90a-42ae-9aa3-a537bccca436" (UID: "6388d4b7-e90a-42ae-9aa3-a537bccca436"). InnerVolumeSpecName "kube-api-access-svgvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.113262 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6388d4b7-e90a-42ae-9aa3-a537bccca436" (UID: "6388d4b7-e90a-42ae-9aa3-a537bccca436"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.147865 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6388d4b7-e90a-42ae-9aa3-a537bccca436" (UID: "6388d4b7-e90a-42ae-9aa3-a537bccca436"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.182459 4983 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.182498 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svgvr\" (UniqueName: \"kubernetes.io/projected/6388d4b7-e90a-42ae-9aa3-a537bccca436-kube-api-access-svgvr\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.182514 4983 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6388d4b7-e90a-42ae-9aa3-a537bccca436-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.182526 4983 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6388d4b7-e90a-42ae-9aa3-a537bccca436-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.182538 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.182553 4983 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.206133 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6388d4b7-e90a-42ae-9aa3-a537bccca436" (UID: "6388d4b7-e90a-42ae-9aa3-a537bccca436"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.222128 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-config-data" (OuterVolumeSpecName: "config-data") pod "6388d4b7-e90a-42ae-9aa3-a537bccca436" (UID: "6388d4b7-e90a-42ae-9aa3-a537bccca436"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.285051 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.285091 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6388d4b7-e90a-42ae-9aa3-a537bccca436-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.621585 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e29df2-0e09-486f-afe1-fb74a909567c" path="/var/lib/kubelet/pods/27e29df2-0e09-486f-afe1-fb74a909567c/volumes" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.882140 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6388d4b7-e90a-42ae-9aa3-a537bccca436","Type":"ContainerDied","Data":"85b87bde0e3a36c847be993d80e167930b074ff0cbfce5dec057c86285d11510"} Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.882200 4983 scope.go:117] "RemoveContainer" containerID="2be585ab05231d9a05d3de879bfa426f74f686ab1cc023442314e8a5e8e72d8e" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.882216 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.911872 4983 scope.go:117] "RemoveContainer" containerID="ad79feaaa8b4f74f9a310fc7c8e95feaa9b82fa6cc4e91f4919e0ed0933d7ab8" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.923723 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.932425 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.935668 4983 scope.go:117] "RemoveContainer" containerID="4084938c90ad7f7d494b1cea55c3a38bc998ed26efe9c1707d782cb81c2692e3" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.960622 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:37 crc kubenswrapper[4983]: E1125 20:46:37.961040 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="proxy-httpd" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.961059 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="proxy-httpd" Nov 25 20:46:37 crc kubenswrapper[4983]: E1125 20:46:37.961073 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="ceilometer-central-agent" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.961081 4983 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="ceilometer-central-agent" Nov 25 20:46:37 crc kubenswrapper[4983]: E1125 20:46:37.961097 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e29df2-0e09-486f-afe1-fb74a909567c" containerName="init" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.961103 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e29df2-0e09-486f-afe1-fb74a909567c" containerName="init" Nov 25 20:46:37 crc kubenswrapper[4983]: E1125 20:46:37.961137 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="sg-core" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.961142 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="sg-core" Nov 25 20:46:37 crc kubenswrapper[4983]: E1125 20:46:37.961150 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="ceilometer-notification-agent" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.961155 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="ceilometer-notification-agent" Nov 25 20:46:37 crc kubenswrapper[4983]: E1125 20:46:37.961169 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e29df2-0e09-486f-afe1-fb74a909567c" containerName="dnsmasq-dns" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.961174 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e29df2-0e09-486f-afe1-fb74a909567c" containerName="dnsmasq-dns" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.961350 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="ceilometer-notification-agent" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.961370 4983 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="ceilometer-central-agent" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.961382 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="sg-core" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.961393 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e29df2-0e09-486f-afe1-fb74a909567c" containerName="dnsmasq-dns" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.961405 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" containerName="proxy-httpd" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.965708 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.971521 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.971837 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.972164 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.982351 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:37 crc kubenswrapper[4983]: I1125 20:46:37.985536 4983 scope.go:117] "RemoveContainer" containerID="2e1156b52ae1b410622b51cac3062f7c9f466822489a6255b86510e9a5b6c408" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.016855 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-ceilometer-tls-certs\") 
pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.016931 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0a145c6-0515-4cd9-98d1-438f069496e8-run-httpd\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.017050 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbc9q\" (UniqueName: \"kubernetes.io/projected/a0a145c6-0515-4cd9-98d1-438f069496e8-kube-api-access-nbc9q\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.017104 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-config-data\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.017204 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.017224 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-scripts\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc 
kubenswrapper[4983]: I1125 20:46:38.017261 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0a145c6-0515-4cd9-98d1-438f069496e8-log-httpd\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.017412 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.119365 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0a145c6-0515-4cd9-98d1-438f069496e8-run-httpd\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.119461 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbc9q\" (UniqueName: \"kubernetes.io/projected/a0a145c6-0515-4cd9-98d1-438f069496e8-kube-api-access-nbc9q\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.119513 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-config-data\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.119621 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.119654 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-scripts\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.119696 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0a145c6-0515-4cd9-98d1-438f069496e8-log-httpd\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.119841 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.119906 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.121760 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0a145c6-0515-4cd9-98d1-438f069496e8-log-httpd\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.121781 
4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0a145c6-0515-4cd9-98d1-438f069496e8-run-httpd\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.124652 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.125146 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-config-data\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.126976 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.127730 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.149342 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0a145c6-0515-4cd9-98d1-438f069496e8-scripts\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " 
pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.156979 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbc9q\" (UniqueName: \"kubernetes.io/projected/a0a145c6-0515-4cd9-98d1-438f069496e8-kube-api-access-nbc9q\") pod \"ceilometer-0\" (UID: \"a0a145c6-0515-4cd9-98d1-438f069496e8\") " pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.289030 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.786774 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 20:46:38 crc kubenswrapper[4983]: W1125 20:46:38.789364 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0a145c6_0515_4cd9_98d1_438f069496e8.slice/crio-c3480590a54fbcb965e1a3646b83be26a20503adcd025d6d215e2ecff44cb587 WatchSource:0}: Error finding container c3480590a54fbcb965e1a3646b83be26a20503adcd025d6d215e2ecff44cb587: Status 404 returned error can't find the container with id c3480590a54fbcb965e1a3646b83be26a20503adcd025d6d215e2ecff44cb587 Nov 25 20:46:38 crc kubenswrapper[4983]: I1125 20:46:38.895179 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0a145c6-0515-4cd9-98d1-438f069496e8","Type":"ContainerStarted","Data":"c3480590a54fbcb965e1a3646b83be26a20503adcd025d6d215e2ecff44cb587"} Nov 25 20:46:39 crc kubenswrapper[4983]: I1125 20:46:39.642895 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6388d4b7-e90a-42ae-9aa3-a537bccca436" path="/var/lib/kubelet/pods/6388d4b7-e90a-42ae-9aa3-a537bccca436/volumes" Nov 25 20:46:39 crc kubenswrapper[4983]: I1125 20:46:39.910473 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a0a145c6-0515-4cd9-98d1-438f069496e8","Type":"ContainerStarted","Data":"0c91e59a4f5e356a074c383b72cc7471552b90944e73acae79050db467f8b10e"} Nov 25 20:46:40 crc kubenswrapper[4983]: I1125 20:46:40.923791 4983 generic.go:334] "Generic (PLEG): container finished" podID="a28d6ec9-9763-4034-ada5-549b22bf6607" containerID="28e4630680fced936f2d100c4632c7a0cc62c5fed9b7d1a72528a45e0ccf7215" exitCode=0 Nov 25 20:46:40 crc kubenswrapper[4983]: I1125 20:46:40.924915 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q7h2t" event={"ID":"a28d6ec9-9763-4034-ada5-549b22bf6607","Type":"ContainerDied","Data":"28e4630680fced936f2d100c4632c7a0cc62c5fed9b7d1a72528a45e0ccf7215"} Nov 25 20:46:40 crc kubenswrapper[4983]: I1125 20:46:40.927748 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0a145c6-0515-4cd9-98d1-438f069496e8","Type":"ContainerStarted","Data":"54840ad6c8a2029c8bf73429662008c911f46c6a64da2643b19210c4d4092d0d"} Nov 25 20:46:40 crc kubenswrapper[4983]: I1125 20:46:40.927897 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0a145c6-0515-4cd9-98d1-438f069496e8","Type":"ContainerStarted","Data":"c890ac85fa827893f761514f368f8ee20502ee83c53f80b749e9c351e88bc56c"} Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.233671 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.234445 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.337996 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.412652 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-scripts\") pod \"a28d6ec9-9763-4034-ada5-549b22bf6607\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.412730 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-combined-ca-bundle\") pod \"a28d6ec9-9763-4034-ada5-549b22bf6607\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.412782 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-config-data\") pod \"a28d6ec9-9763-4034-ada5-549b22bf6607\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.412887 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp92b\" (UniqueName: \"kubernetes.io/projected/a28d6ec9-9763-4034-ada5-549b22bf6607-kube-api-access-bp92b\") pod \"a28d6ec9-9763-4034-ada5-549b22bf6607\" (UID: \"a28d6ec9-9763-4034-ada5-549b22bf6607\") " Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.421823 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a28d6ec9-9763-4034-ada5-549b22bf6607-kube-api-access-bp92b" (OuterVolumeSpecName: "kube-api-access-bp92b") pod "a28d6ec9-9763-4034-ada5-549b22bf6607" (UID: "a28d6ec9-9763-4034-ada5-549b22bf6607"). InnerVolumeSpecName "kube-api-access-bp92b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.430823 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-scripts" (OuterVolumeSpecName: "scripts") pod "a28d6ec9-9763-4034-ada5-549b22bf6607" (UID: "a28d6ec9-9763-4034-ada5-549b22bf6607"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.452471 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-config-data" (OuterVolumeSpecName: "config-data") pod "a28d6ec9-9763-4034-ada5-549b22bf6607" (UID: "a28d6ec9-9763-4034-ada5-549b22bf6607"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.454331 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a28d6ec9-9763-4034-ada5-549b22bf6607" (UID: "a28d6ec9-9763-4034-ada5-549b22bf6607"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.514689 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.514734 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp92b\" (UniqueName: \"kubernetes.io/projected/a28d6ec9-9763-4034-ada5-549b22bf6607-kube-api-access-bp92b\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.514939 4983 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.514952 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a28d6ec9-9763-4034-ada5-549b22bf6607-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.952006 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q7h2t" event={"ID":"a28d6ec9-9763-4034-ada5-549b22bf6607","Type":"ContainerDied","Data":"19b4360a4afd41d0cc932e7b65ef7929d4bdc86ea6ad1c857ec3837ca267c2ef"} Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.952111 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b4360a4afd41d0cc932e7b65ef7929d4bdc86ea6ad1c857ec3837ca267c2ef" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.952043 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q7h2t" Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.957147 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0a145c6-0515-4cd9-98d1-438f069496e8","Type":"ContainerStarted","Data":"98a8de6cbea33199c4975a63c677b6c52804578fb73c6910ab6b3d1be8e33c90"} Nov 25 20:46:42 crc kubenswrapper[4983]: I1125 20:46:42.961658 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.046667 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.983103578 podStartE2EDuration="6.046638728s" podCreationTimestamp="2025-11-25 20:46:37 +0000 UTC" firstStartedPulling="2025-11-25 20:46:38.791361547 +0000 UTC m=+1179.903894939" lastFinishedPulling="2025-11-25 20:46:41.854896687 +0000 UTC m=+1182.967430089" observedRunningTime="2025-11-25 20:46:43.037137407 +0000 UTC m=+1184.149670829" watchObservedRunningTime="2025-11-25 20:46:43.046638728 +0000 UTC m=+1184.159172120" Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.150314 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.150712 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" containerName="nova-api-api" containerID="cri-o://696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd" gracePeriod=30 Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.150943 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" containerName="nova-api-log" containerID="cri-o://b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa" gracePeriod=30 Nov 25 20:46:43 crc 
kubenswrapper[4983]: I1125 20:46:43.166075 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": EOF" Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.166160 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": EOF" Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.171036 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.171356 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1" containerName="nova-scheduler-scheduler" containerID="cri-o://6f003e148d3b89a68be1e63708eb080536ae2caf1c25d82591c53535eb754912" gracePeriod=30 Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.248161 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.248467 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerName="nova-metadata-log" containerID="cri-o://2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9" gracePeriod=30 Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.249050 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerName="nova-metadata-metadata" containerID="cri-o://4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d" gracePeriod=30 Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.970164 
4983 generic.go:334] "Generic (PLEG): container finished" podID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" containerID="b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa" exitCode=143 Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.970229 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4f6ce8bd-ed80-467b-bb3f-970b158cad4e","Type":"ContainerDied","Data":"b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa"} Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.972760 4983 generic.go:334] "Generic (PLEG): container finished" podID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerID="2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9" exitCode=143 Nov 25 20:46:43 crc kubenswrapper[4983]: I1125 20:46:43.972804 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b5c408b-2bfc-4f6a-af31-b877a0e9685f","Type":"ContainerDied","Data":"2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9"} Nov 25 20:46:44 crc kubenswrapper[4983]: E1125 20:46:44.017195 4983 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f003e148d3b89a68be1e63708eb080536ae2caf1c25d82591c53535eb754912" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 20:46:44 crc kubenswrapper[4983]: E1125 20:46:44.020500 4983 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f003e148d3b89a68be1e63708eb080536ae2caf1c25d82591c53535eb754912" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 20:46:44 crc kubenswrapper[4983]: E1125 20:46:44.021982 4983 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f003e148d3b89a68be1e63708eb080536ae2caf1c25d82591c53535eb754912" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 20:46:44 crc kubenswrapper[4983]: E1125 20:46:44.022026 4983 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1" containerName="nova-scheduler-scheduler" Nov 25 20:46:46 crc kubenswrapper[4983]: I1125 20:46:46.415995 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:38592->10.217.0.196:8775: read: connection reset by peer" Nov 25 20:46:46 crc kubenswrapper[4983]: I1125 20:46:46.416255 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:38608->10.217.0.196:8775: read: connection reset by peer" Nov 25 20:46:46 crc kubenswrapper[4983]: I1125 20:46:46.866892 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:46:46 crc kubenswrapper[4983]: I1125 20:46:46.955600 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8hdn\" (UniqueName: \"kubernetes.io/projected/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-kube-api-access-v8hdn\") pod \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " Nov 25 20:46:46 crc kubenswrapper[4983]: I1125 20:46:46.955921 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-combined-ca-bundle\") pod \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " Nov 25 20:46:46 crc kubenswrapper[4983]: I1125 20:46:46.955992 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-config-data\") pod \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " Nov 25 20:46:46 crc kubenswrapper[4983]: I1125 20:46:46.956074 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-nova-metadata-tls-certs\") pod \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " Nov 25 20:46:46 crc kubenswrapper[4983]: I1125 20:46:46.956209 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-logs\") pod \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\" (UID: \"8b5c408b-2bfc-4f6a-af31-b877a0e9685f\") " Nov 25 20:46:46 crc kubenswrapper[4983]: I1125 20:46:46.957712 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-logs" (OuterVolumeSpecName: "logs") pod "8b5c408b-2bfc-4f6a-af31-b877a0e9685f" (UID: "8b5c408b-2bfc-4f6a-af31-b877a0e9685f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:46:46 crc kubenswrapper[4983]: I1125 20:46:46.963926 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-kube-api-access-v8hdn" (OuterVolumeSpecName: "kube-api-access-v8hdn") pod "8b5c408b-2bfc-4f6a-af31-b877a0e9685f" (UID: "8b5c408b-2bfc-4f6a-af31-b877a0e9685f"). InnerVolumeSpecName "kube-api-access-v8hdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:46 crc kubenswrapper[4983]: I1125 20:46:46.999172 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-config-data" (OuterVolumeSpecName: "config-data") pod "8b5c408b-2bfc-4f6a-af31-b877a0e9685f" (UID: "8b5c408b-2bfc-4f6a-af31-b877a0e9685f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:46 crc kubenswrapper[4983]: I1125 20:46:46.999865 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b5c408b-2bfc-4f6a-af31-b877a0e9685f" (UID: "8b5c408b-2bfc-4f6a-af31-b877a0e9685f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.028207 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8b5c408b-2bfc-4f6a-af31-b877a0e9685f" (UID: "8b5c408b-2bfc-4f6a-af31-b877a0e9685f"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.060731 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.060789 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.060799 4983 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.060811 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.060823 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8hdn\" (UniqueName: \"kubernetes.io/projected/8b5c408b-2bfc-4f6a-af31-b877a0e9685f-kube-api-access-v8hdn\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.070044 4983 generic.go:334] "Generic (PLEG): container finished" podID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerID="4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d" exitCode=0 Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.070116 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b5c408b-2bfc-4f6a-af31-b877a0e9685f","Type":"ContainerDied","Data":"4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d"} 
Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.070130 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.070164 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8b5c408b-2bfc-4f6a-af31-b877a0e9685f","Type":"ContainerDied","Data":"433e2781cf2a04056f46aed22b6896255405e13ca444152c42f0c60ac67ac8de"} Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.070201 4983 scope.go:117] "RemoveContainer" containerID="4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.096162 4983 scope.go:117] "RemoveContainer" containerID="2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.111584 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.126515 4983 scope.go:117] "RemoveContainer" containerID="4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d" Nov 25 20:46:47 crc kubenswrapper[4983]: E1125 20:46:47.130792 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d\": container with ID starting with 4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d not found: ID does not exist" containerID="4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.131048 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d"} err="failed to get container status \"4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d\": rpc error: code = NotFound desc = 
could not find container \"4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d\": container with ID starting with 4cb2a2648baeedd522766519e387b6f2e1299995dd9f93f7076c5115d8e4481d not found: ID does not exist" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.131076 4983 scope.go:117] "RemoveContainer" containerID="2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9" Nov 25 20:46:47 crc kubenswrapper[4983]: E1125 20:46:47.131653 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9\": container with ID starting with 2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9 not found: ID does not exist" containerID="2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.131714 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9"} err="failed to get container status \"2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9\": rpc error: code = NotFound desc = could not find container \"2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9\": container with ID starting with 2a2093050b4579d2f96a512641a9c999c449dbce7bcf04eeb438039a167355b9 not found: ID does not exist" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.135750 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.139865 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:46:47 crc kubenswrapper[4983]: E1125 20:46:47.141049 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerName="nova-metadata-log" Nov 25 20:46:47 crc 
kubenswrapper[4983]: I1125 20:46:47.141158 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerName="nova-metadata-log" Nov 25 20:46:47 crc kubenswrapper[4983]: E1125 20:46:47.141266 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerName="nova-metadata-metadata" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.141328 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerName="nova-metadata-metadata" Nov 25 20:46:47 crc kubenswrapper[4983]: E1125 20:46:47.141415 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a28d6ec9-9763-4034-ada5-549b22bf6607" containerName="nova-manage" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.141482 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="a28d6ec9-9763-4034-ada5-549b22bf6607" containerName="nova-manage" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.142194 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="a28d6ec9-9763-4034-ada5-549b22bf6607" containerName="nova-manage" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.142316 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerName="nova-metadata-log" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.142411 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" containerName="nova-metadata-metadata" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.155477 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.185345 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.208622 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.209908 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.269480 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.269571 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-logs\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.270027 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-config-data\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.270088 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.270125 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2kqp\" (UniqueName: \"kubernetes.io/projected/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-kube-api-access-g2kqp\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.372010 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-logs\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.372161 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-config-data\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.372185 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.372206 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2kqp\" (UniqueName: \"kubernetes.io/projected/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-kube-api-access-g2kqp\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.372234 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.373183 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-logs\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.375494 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.376534 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.376978 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-config-data\") pod \"nova-metadata-0\" (UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.392336 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2kqp\" (UniqueName: \"kubernetes.io/projected/eba27c66-d8be-4e3c-a39c-2f521c69a3d6-kube-api-access-g2kqp\") pod \"nova-metadata-0\" 
(UID: \"eba27c66-d8be-4e3c-a39c-2f521c69a3d6\") " pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.535205 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.620022 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5c408b-2bfc-4f6a-af31-b877a0e9685f" path="/var/lib/kubelet/pods/8b5c408b-2bfc-4f6a-af31-b877a0e9685f/volumes" Nov 25 20:46:47 crc kubenswrapper[4983]: I1125 20:46:47.995959 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.079518 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eba27c66-d8be-4e3c-a39c-2f521c69a3d6","Type":"ContainerStarted","Data":"2874c4167412c41b08eec95b6bf268c181c7f0189210cf5f67b0f1fa4e0c23a8"} Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.083164 4983 generic.go:334] "Generic (PLEG): container finished" podID="20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1" containerID="6f003e148d3b89a68be1e63708eb080536ae2caf1c25d82591c53535eb754912" exitCode=0 Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.083216 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1","Type":"ContainerDied","Data":"6f003e148d3b89a68be1e63708eb080536ae2caf1c25d82591c53535eb754912"} Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.083246 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1","Type":"ContainerDied","Data":"325e1aa7f7885d946b915bf9940324ec9b7fdaf6ea4eab732d7cb9771bc5cc41"} Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.083258 4983 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="325e1aa7f7885d946b915bf9940324ec9b7fdaf6ea4eab732d7cb9771bc5cc41" Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.105728 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.186688 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-config-data\") pod \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\" (UID: \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\") " Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.186820 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hflk\" (UniqueName: \"kubernetes.io/projected/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-kube-api-access-8hflk\") pod \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\" (UID: \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\") " Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.186894 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-combined-ca-bundle\") pod \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\" (UID: \"20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1\") " Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.193928 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-kube-api-access-8hflk" (OuterVolumeSpecName: "kube-api-access-8hflk") pod "20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1" (UID: "20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1"). InnerVolumeSpecName "kube-api-access-8hflk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.230591 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1" (UID: "20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.237075 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-config-data" (OuterVolumeSpecName: "config-data") pod "20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1" (UID: "20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.288806 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hflk\" (UniqueName: \"kubernetes.io/projected/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-kube-api-access-8hflk\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.288842 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:48 crc kubenswrapper[4983]: I1125 20:46:48.288857 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.040512 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.096132 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eba27c66-d8be-4e3c-a39c-2f521c69a3d6","Type":"ContainerStarted","Data":"3b87fa6be3a49cca85231045d2432f0040a5f9b313768800fc9971dcb5591471"} Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.096223 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eba27c66-d8be-4e3c-a39c-2f521c69a3d6","Type":"ContainerStarted","Data":"25606ed0e6b18fd52cb170b3e88d2f2523bb66ea48bf0be52d7316ecfbf32399"} Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.101669 4983 generic.go:334] "Generic (PLEG): container finished" podID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" containerID="696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd" exitCode=0 Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.101861 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.101982 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4f6ce8bd-ed80-467b-bb3f-970b158cad4e","Type":"ContainerDied","Data":"696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd"} Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.102055 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4f6ce8bd-ed80-467b-bb3f-970b158cad4e","Type":"ContainerDied","Data":"9fa2be6c92668e5c74b33917145b8c41f1f890c240b0c54b4afd037e7ab6b379"} Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.102109 4983 scope.go:117] "RemoveContainer" containerID="696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.103858 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.104034 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-config-data\") pod \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.104205 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-logs\") pod \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.104267 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-combined-ca-bundle\") pod \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.104300 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-public-tls-certs\") pod \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.104323 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-internal-tls-certs\") pod \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.104345 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npcsm\" (UniqueName: 
\"kubernetes.io/projected/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-kube-api-access-npcsm\") pod \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\" (UID: \"4f6ce8bd-ed80-467b-bb3f-970b158cad4e\") " Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.106066 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-logs" (OuterVolumeSpecName: "logs") pod "4f6ce8bd-ed80-467b-bb3f-970b158cad4e" (UID: "4f6ce8bd-ed80-467b-bb3f-970b158cad4e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.121284 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-kube-api-access-npcsm" (OuterVolumeSpecName: "kube-api-access-npcsm") pod "4f6ce8bd-ed80-467b-bb3f-970b158cad4e" (UID: "4f6ce8bd-ed80-467b-bb3f-970b158cad4e"). InnerVolumeSpecName "kube-api-access-npcsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.139527 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.139497713 podStartE2EDuration="2.139497713s" podCreationTimestamp="2025-11-25 20:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:46:49.127068024 +0000 UTC m=+1190.239601416" watchObservedRunningTime="2025-11-25 20:46:49.139497713 +0000 UTC m=+1190.252031105" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.149506 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f6ce8bd-ed80-467b-bb3f-970b158cad4e" (UID: "4f6ce8bd-ed80-467b-bb3f-970b158cad4e"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.151082 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-config-data" (OuterVolumeSpecName: "config-data") pod "4f6ce8bd-ed80-467b-bb3f-970b158cad4e" (UID: "4f6ce8bd-ed80-467b-bb3f-970b158cad4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.204948 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.205275 4983 scope.go:117] "RemoveContainer" containerID="b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.205386 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4f6ce8bd-ed80-467b-bb3f-970b158cad4e" (UID: "4f6ce8bd-ed80-467b-bb3f-970b158cad4e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.207599 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npcsm\" (UniqueName: \"kubernetes.io/projected/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-kube-api-access-npcsm\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.207630 4983 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.207639 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.207649 4983 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-logs\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.207656 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.219522 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.236640 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4f6ce8bd-ed80-467b-bb3f-970b158cad4e" (UID: "4f6ce8bd-ed80-467b-bb3f-970b158cad4e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.250154 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:46:49 crc kubenswrapper[4983]: E1125 20:46:49.252148 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" containerName="nova-api-log" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.252177 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" containerName="nova-api-log" Nov 25 20:46:49 crc kubenswrapper[4983]: E1125 20:46:49.252215 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" containerName="nova-api-api" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.252225 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" containerName="nova-api-api" Nov 25 20:46:49 crc kubenswrapper[4983]: E1125 20:46:49.252257 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1" containerName="nova-scheduler-scheduler" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.252266 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1" containerName="nova-scheduler-scheduler" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.252545 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" containerName="nova-api-api" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.252585 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" containerName="nova-api-log" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.252600 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1" 
containerName="nova-scheduler-scheduler" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.253531 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.256455 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.262170 4983 scope.go:117] "RemoveContainer" containerID="696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd" Nov 25 20:46:49 crc kubenswrapper[4983]: E1125 20:46:49.262600 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd\": container with ID starting with 696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd not found: ID does not exist" containerID="696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.262642 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd"} err="failed to get container status \"696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd\": rpc error: code = NotFound desc = could not find container \"696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd\": container with ID starting with 696b2ac8d9038a7fbcea70c592b1e6468ccfb07a74da7fcea50c37f9840617fd not found: ID does not exist" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.262671 4983 scope.go:117] "RemoveContainer" containerID="b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.262692 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:46:49 crc kubenswrapper[4983]: E1125 
20:46:49.262992 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa\": container with ID starting with b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa not found: ID does not exist" containerID="b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.263028 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa"} err="failed to get container status \"b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa\": rpc error: code = NotFound desc = could not find container \"b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa\": container with ID starting with b30ba184bb72dbfae1f63ba5f719dd36807e604a27e62c8051593867bbe158fa not found: ID does not exist" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.309074 4983 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6ce8bd-ed80-467b-bb3f-970b158cad4e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.411331 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa32390-cf93-44b1-b27f-4b66ffb61a41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0aa32390-cf93-44b1-b27f-4b66ffb61a41\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.411443 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa32390-cf93-44b1-b27f-4b66ffb61a41-config-data\") pod \"nova-scheduler-0\" (UID: 
\"0aa32390-cf93-44b1-b27f-4b66ffb61a41\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.411835 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4g28\" (UniqueName: \"kubernetes.io/projected/0aa32390-cf93-44b1-b27f-4b66ffb61a41-kube-api-access-z4g28\") pod \"nova-scheduler-0\" (UID: \"0aa32390-cf93-44b1-b27f-4b66ffb61a41\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.442730 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.454677 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.466550 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.470208 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.472957 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.473173 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.473176 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.492797 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.514448 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa32390-cf93-44b1-b27f-4b66ffb61a41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0aa32390-cf93-44b1-b27f-4b66ffb61a41\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.514532 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aa32390-cf93-44b1-b27f-4b66ffb61a41-config-data\") pod \"nova-scheduler-0\" (UID: \"0aa32390-cf93-44b1-b27f-4b66ffb61a41\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.514704 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4g28\" (UniqueName: \"kubernetes.io/projected/0aa32390-cf93-44b1-b27f-4b66ffb61a41-kube-api-access-z4g28\") pod \"nova-scheduler-0\" (UID: \"0aa32390-cf93-44b1-b27f-4b66ffb61a41\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.541061 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0aa32390-cf93-44b1-b27f-4b66ffb61a41-config-data\") pod \"nova-scheduler-0\" (UID: \"0aa32390-cf93-44b1-b27f-4b66ffb61a41\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.542706 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aa32390-cf93-44b1-b27f-4b66ffb61a41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0aa32390-cf93-44b1-b27f-4b66ffb61a41\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.546438 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4g28\" (UniqueName: \"kubernetes.io/projected/0aa32390-cf93-44b1-b27f-4b66ffb61a41-kube-api-access-z4g28\") pod \"nova-scheduler-0\" (UID: \"0aa32390-cf93-44b1-b27f-4b66ffb61a41\") " pod="openstack/nova-scheduler-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.578793 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.617021 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b199cea8-e855-4316-b80b-8cad8bce9f45-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.617088 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b199cea8-e855-4316-b80b-8cad8bce9f45-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.617111 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b199cea8-e855-4316-b80b-8cad8bce9f45-public-tls-certs\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.617185 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2t7s\" (UniqueName: \"kubernetes.io/projected/b199cea8-e855-4316-b80b-8cad8bce9f45-kube-api-access-r2t7s\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.617295 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b199cea8-e855-4316-b80b-8cad8bce9f45-logs\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.617323 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b199cea8-e855-4316-b80b-8cad8bce9f45-config-data\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.618656 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1" path="/var/lib/kubelet/pods/20e7a2fb-4c0f-4a72-a8bb-794b6860c3e1/volumes" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.619482 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f6ce8bd-ed80-467b-bb3f-970b158cad4e" path="/var/lib/kubelet/pods/4f6ce8bd-ed80-467b-bb3f-970b158cad4e/volumes" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.718818 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b199cea8-e855-4316-b80b-8cad8bce9f45-logs\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.719314 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b199cea8-e855-4316-b80b-8cad8bce9f45-config-data\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.719452 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b199cea8-e855-4316-b80b-8cad8bce9f45-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.719509 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b199cea8-e855-4316-b80b-8cad8bce9f45-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.719537 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b199cea8-e855-4316-b80b-8cad8bce9f45-public-tls-certs\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.719650 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2t7s\" (UniqueName: \"kubernetes.io/projected/b199cea8-e855-4316-b80b-8cad8bce9f45-kube-api-access-r2t7s\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.722871 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b199cea8-e855-4316-b80b-8cad8bce9f45-logs\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.732862 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b199cea8-e855-4316-b80b-8cad8bce9f45-public-tls-certs\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.733622 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b199cea8-e855-4316-b80b-8cad8bce9f45-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.734333 
4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b199cea8-e855-4316-b80b-8cad8bce9f45-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.734553 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b199cea8-e855-4316-b80b-8cad8bce9f45-config-data\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.740369 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2t7s\" (UniqueName: \"kubernetes.io/projected/b199cea8-e855-4316-b80b-8cad8bce9f45-kube-api-access-r2t7s\") pod \"nova-api-0\" (UID: \"b199cea8-e855-4316-b80b-8cad8bce9f45\") " pod="openstack/nova-api-0" Nov 25 20:46:49 crc kubenswrapper[4983]: I1125 20:46:49.893162 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 20:46:50 crc kubenswrapper[4983]: I1125 20:46:50.025903 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 20:46:50 crc kubenswrapper[4983]: I1125 20:46:50.115658 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0aa32390-cf93-44b1-b27f-4b66ffb61a41","Type":"ContainerStarted","Data":"800c0c7035283ef16b1837fb812276276cac08fecc80805779361bafdfedeb72"} Nov 25 20:46:50 crc kubenswrapper[4983]: I1125 20:46:50.844406 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 20:46:51 crc kubenswrapper[4983]: I1125 20:46:51.127950 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0aa32390-cf93-44b1-b27f-4b66ffb61a41","Type":"ContainerStarted","Data":"b513f94977b6b08e238e57ea2aaf4fee787fe5536e9f402d98bc7fc11c09fcca"} Nov 25 20:46:51 crc kubenswrapper[4983]: I1125 20:46:51.132413 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b199cea8-e855-4316-b80b-8cad8bce9f45","Type":"ContainerStarted","Data":"8ab6ec170717e88b6a0f21902a9cb3681a7f130e8720efc1efd579c509b5d5b2"} Nov 25 20:46:51 crc kubenswrapper[4983]: I1125 20:46:51.132591 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b199cea8-e855-4316-b80b-8cad8bce9f45","Type":"ContainerStarted","Data":"cbbd352593b1dc5d75020e7752abdd9dcb765f28f3945df1117b04a7bd97cf96"} Nov 25 20:46:51 crc kubenswrapper[4983]: I1125 20:46:51.155160 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.15514112 podStartE2EDuration="2.15514112s" podCreationTimestamp="2025-11-25 20:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:46:51.149774498 +0000 
UTC m=+1192.262307900" watchObservedRunningTime="2025-11-25 20:46:51.15514112 +0000 UTC m=+1192.267674512" Nov 25 20:46:52 crc kubenswrapper[4983]: I1125 20:46:52.150026 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b199cea8-e855-4316-b80b-8cad8bce9f45","Type":"ContainerStarted","Data":"de7e92be8d262573fcecda0f0e628d0609a32e5e5cd5c5461d28e30f6f0d0e2b"} Nov 25 20:46:52 crc kubenswrapper[4983]: I1125 20:46:52.185344 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.185323784 podStartE2EDuration="3.185323784s" podCreationTimestamp="2025-11-25 20:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:46:52.178250977 +0000 UTC m=+1193.290784389" watchObservedRunningTime="2025-11-25 20:46:52.185323784 +0000 UTC m=+1193.297857166" Nov 25 20:46:52 crc kubenswrapper[4983]: I1125 20:46:52.536532 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 20:46:52 crc kubenswrapper[4983]: I1125 20:46:52.536636 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 20:46:54 crc kubenswrapper[4983]: I1125 20:46:54.579541 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 20:46:57 crc kubenswrapper[4983]: I1125 20:46:57.537692 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 20:46:57 crc kubenswrapper[4983]: I1125 20:46:57.539030 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 20:46:58 crc kubenswrapper[4983]: I1125 20:46:58.547830 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="eba27c66-d8be-4e3c-a39c-2f521c69a3d6" 
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 20:46:58 crc kubenswrapper[4983]: I1125 20:46:58.547864 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="eba27c66-d8be-4e3c-a39c-2f521c69a3d6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 20:46:59 crc kubenswrapper[4983]: I1125 20:46:59.580082 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 20:46:59 crc kubenswrapper[4983]: I1125 20:46:59.616109 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 20:46:59 crc kubenswrapper[4983]: I1125 20:46:59.893493 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 20:46:59 crc kubenswrapper[4983]: I1125 20:46:59.893975 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 20:47:00 crc kubenswrapper[4983]: I1125 20:47:00.308928 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 20:47:00 crc kubenswrapper[4983]: I1125 20:47:00.907717 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b199cea8-e855-4316-b80b-8cad8bce9f45" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 20:47:00 crc kubenswrapper[4983]: I1125 20:47:00.907805 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b199cea8-e855-4316-b80b-8cad8bce9f45" containerName="nova-api-api" 
probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 20:47:07 crc kubenswrapper[4983]: I1125 20:47:07.540342 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 20:47:07 crc kubenswrapper[4983]: I1125 20:47:07.543112 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 20:47:07 crc kubenswrapper[4983]: I1125 20:47:07.544364 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 20:47:08 crc kubenswrapper[4983]: I1125 20:47:08.309236 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 20:47:08 crc kubenswrapper[4983]: I1125 20:47:08.401067 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 20:47:09 crc kubenswrapper[4983]: I1125 20:47:09.904364 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 20:47:09 crc kubenswrapper[4983]: I1125 20:47:09.905676 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 20:47:09 crc kubenswrapper[4983]: I1125 20:47:09.906218 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 20:47:09 crc kubenswrapper[4983]: I1125 20:47:09.912009 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 20:47:09 crc kubenswrapper[4983]: I1125 20:47:09.927402 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 
20:47:09 crc kubenswrapper[4983]: I1125 20:47:09.927455 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:47:10 crc kubenswrapper[4983]: I1125 20:47:10.414362 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 20:47:10 crc kubenswrapper[4983]: I1125 20:47:10.427768 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 20:47:18 crc kubenswrapper[4983]: I1125 20:47:18.320592 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 20:47:19 crc kubenswrapper[4983]: I1125 20:47:19.412580 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 20:47:23 crc kubenswrapper[4983]: I1125 20:47:23.367405 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" containerName="rabbitmq" containerID="cri-o://b99c1995b8ed69a77f165df12c8b5ed28d5ffae10924757fba6e34bd06d6bfe1" gracePeriod=604795 Nov 25 20:47:23 crc kubenswrapper[4983]: I1125 20:47:23.900998 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a7aa78f0-48cd-4845-8a44-52fb63183dff" containerName="rabbitmq" containerID="cri-o://845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74" gracePeriod=604796 Nov 25 20:47:29 crc kubenswrapper[4983]: I1125 20:47:29.671753 4983 generic.go:334] "Generic (PLEG): container finished" podID="1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" containerID="b99c1995b8ed69a77f165df12c8b5ed28d5ffae10924757fba6e34bd06d6bfe1" exitCode=0 Nov 25 20:47:29 
crc kubenswrapper[4983]: I1125 20:47:29.671894 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90","Type":"ContainerDied","Data":"b99c1995b8ed69a77f165df12c8b5ed28d5ffae10924757fba6e34bd06d6bfe1"} Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.022130 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.052991 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbvz5\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-kube-api-access-vbvz5\") pod \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.053113 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-config-data\") pod \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.053144 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-server-conf\") pod \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.053208 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-erlang-cookie\") pod \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.053255 4983 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-plugins\") pod \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.053440 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-plugins-conf\") pod \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.053496 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-pod-info\") pod \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.053531 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.053576 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-confd\") pod \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.053727 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-erlang-cookie-secret\") pod \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") " 
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.053807 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-tls\") pod \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\" (UID: \"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90\") "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.054361 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" (UID: "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.055066 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" (UID: "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.055469 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" (UID: "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.071669 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" (UID: "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.072369 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" (UID: "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.082870 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-kube-api-access-vbvz5" (OuterVolumeSpecName: "kube-api-access-vbvz5") pod "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" (UID: "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90"). InnerVolumeSpecName "kube-api-access-vbvz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.090851 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-pod-info" (OuterVolumeSpecName: "pod-info") pod "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" (UID: "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.091031 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" (UID: "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.091864 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-config-data" (OuterVolumeSpecName: "config-data") pod "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" (UID: "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.156920 4983 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.156962 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbvz5\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-kube-api-access-vbvz5\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.156979 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.156992 4983 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.157003 4983 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.157014 4983 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.157024 4983 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-pod-info\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.157059 4983 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.157070 4983 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.191005 4983 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.191173 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-server-conf" (OuterVolumeSpecName: "server-conf") pod "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" (UID: "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.246282 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" (UID: "1bf4fae0-a5ca-48a8-9f99-5793a06f7f90"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.259886 4983 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.259921 4983 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.259934 4983 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90-server-conf\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.477729 4983 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.671868 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-plugins\") pod \"a7aa78f0-48cd-4845-8a44-52fb63183dff\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.672024 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwnhp\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-kube-api-access-nwnhp\") pod \"a7aa78f0-48cd-4845-8a44-52fb63183dff\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.673185 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a7aa78f0-48cd-4845-8a44-52fb63183dff" (UID: "a7aa78f0-48cd-4845-8a44-52fb63183dff"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.673823 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-server-conf\") pod \"a7aa78f0-48cd-4845-8a44-52fb63183dff\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.673959 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"a7aa78f0-48cd-4845-8a44-52fb63183dff\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.674060 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-config-data\") pod \"a7aa78f0-48cd-4845-8a44-52fb63183dff\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.674139 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-confd\") pod \"a7aa78f0-48cd-4845-8a44-52fb63183dff\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.674470 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-plugins-conf\") pod \"a7aa78f0-48cd-4845-8a44-52fb63183dff\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.674538 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7aa78f0-48cd-4845-8a44-52fb63183dff-erlang-cookie-secret\") pod \"a7aa78f0-48cd-4845-8a44-52fb63183dff\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.674586 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-tls\") pod \"a7aa78f0-48cd-4845-8a44-52fb63183dff\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.674630 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-erlang-cookie\") pod \"a7aa78f0-48cd-4845-8a44-52fb63183dff\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.674710 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7aa78f0-48cd-4845-8a44-52fb63183dff-pod-info\") pod \"a7aa78f0-48cd-4845-8a44-52fb63183dff\" (UID: \"a7aa78f0-48cd-4845-8a44-52fb63183dff\") "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.675341 4983 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.676402 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a7aa78f0-48cd-4845-8a44-52fb63183dff" (UID: "a7aa78f0-48cd-4845-8a44-52fb63183dff"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.676416 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a7aa78f0-48cd-4845-8a44-52fb63183dff" (UID: "a7aa78f0-48cd-4845-8a44-52fb63183dff"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.692216 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "a7aa78f0-48cd-4845-8a44-52fb63183dff" (UID: "a7aa78f0-48cd-4845-8a44-52fb63183dff"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.692385 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7aa78f0-48cd-4845-8a44-52fb63183dff-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a7aa78f0-48cd-4845-8a44-52fb63183dff" (UID: "a7aa78f0-48cd-4845-8a44-52fb63183dff"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.693268 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-kube-api-access-nwnhp" (OuterVolumeSpecName: "kube-api-access-nwnhp") pod "a7aa78f0-48cd-4845-8a44-52fb63183dff" (UID: "a7aa78f0-48cd-4845-8a44-52fb63183dff"). InnerVolumeSpecName "kube-api-access-nwnhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.708027 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a7aa78f0-48cd-4845-8a44-52fb63183dff-pod-info" (OuterVolumeSpecName: "pod-info") pod "a7aa78f0-48cd-4845-8a44-52fb63183dff" (UID: "a7aa78f0-48cd-4845-8a44-52fb63183dff"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.715287 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a7aa78f0-48cd-4845-8a44-52fb63183dff" (UID: "a7aa78f0-48cd-4845-8a44-52fb63183dff"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.716085 4983 generic.go:334] "Generic (PLEG): container finished" podID="a7aa78f0-48cd-4845-8a44-52fb63183dff" containerID="845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74" exitCode=0
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.716184 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a7aa78f0-48cd-4845-8a44-52fb63183dff","Type":"ContainerDied","Data":"845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74"}
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.716216 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a7aa78f0-48cd-4845-8a44-52fb63183dff","Type":"ContainerDied","Data":"3da00ffc871161710459784659b31ad739bed64b27d8484e0d5644fba9de9db1"}
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.716246 4983 scope.go:117] "RemoveContainer" containerID="845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74"
Nov 25 20:47:30 crc
kubenswrapper[4983]: I1125 20:47:30.716439 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.729108 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1bf4fae0-a5ca-48a8-9f99-5793a06f7f90","Type":"ContainerDied","Data":"2d0fbdc9a8c1d37164b3aa2791005237f3c8fb1ccafa599c75c2ea03389ee36b"}
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.729306 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.763673 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-config-data" (OuterVolumeSpecName: "config-data") pod "a7aa78f0-48cd-4845-8a44-52fb63183dff" (UID: "a7aa78f0-48cd-4845-8a44-52fb63183dff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.777825 4983 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" "
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.777856 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.777870 4983 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.777881 4983 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7aa78f0-48cd-4845-8a44-52fb63183dff-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.777892 4983 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.777933 4983 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.777943 4983 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7aa78f0-48cd-4845-8a44-52fb63183dff-pod-info\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.777952 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwnhp\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-kube-api-access-nwnhp\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.785198 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-server-conf" (OuterVolumeSpecName: "server-conf") pod "a7aa78f0-48cd-4845-8a44-52fb63183dff" (UID: "a7aa78f0-48cd-4845-8a44-52fb63183dff"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.806713 4983 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.864662 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a7aa78f0-48cd-4845-8a44-52fb63183dff" (UID: "a7aa78f0-48cd-4845-8a44-52fb63183dff"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.880263 4983 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7aa78f0-48cd-4845-8a44-52fb63183dff-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.880302 4983 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7aa78f0-48cd-4845-8a44-52fb63183dff-server-conf\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.880316 4983 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\""
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.886462 4983 scope.go:117] "RemoveContainer" containerID="ef9a8cd4d098d443b9edc0388e0cba204f522814ecbf14dc542f8da23f5eb227"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.891107 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.899625 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.920219 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 20:47:30 crc kubenswrapper[4983]: E1125 20:47:30.920610 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" containerName="setup-container"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.920634 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" containerName="setup-container"
Nov 25 20:47:30 crc kubenswrapper[4983]: E1125 20:47:30.920656 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" containerName="rabbitmq"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.920664 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" containerName="rabbitmq"
Nov 25 20:47:30 crc kubenswrapper[4983]: E1125 20:47:30.920676 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7aa78f0-48cd-4845-8a44-52fb63183dff" containerName="setup-container"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.920683 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7aa78f0-48cd-4845-8a44-52fb63183dff" containerName="setup-container"
Nov 25 20:47:30 crc kubenswrapper[4983]: E1125 20:47:30.920717 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7aa78f0-48cd-4845-8a44-52fb63183dff" containerName="rabbitmq"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.920723 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7aa78f0-48cd-4845-8a44-52fb63183dff" containerName="rabbitmq"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.922525 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7aa78f0-48cd-4845-8a44-52fb63183dff" containerName="rabbitmq"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.922578 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" containerName="rabbitmq"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.923509 4983 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.925909 4983 scope.go:117] "RemoveContainer" containerID="845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.926367 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 25 20:47:30 crc kubenswrapper[4983]: E1125 20:47:30.926431 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74\": container with ID starting with 845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74 not found: ID does not exist" containerID="845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.926458 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74"} err="failed to get container status \"845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74\": rpc error: code = NotFound desc = could not find container \"845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74\": container with ID starting with 845ec252f968f8ddc1acb3acdaf962a7a06e4d53102a5f31b1127293a7259c74 not found: ID does not exist"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.926486 4983 scope.go:117] "RemoveContainer" containerID="ef9a8cd4d098d443b9edc0388e0cba204f522814ecbf14dc542f8da23f5eb227"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.926733 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 25 20:47:30 crc kubenswrapper[4983]: E1125 20:47:30.926858 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef9a8cd4d098d443b9edc0388e0cba204f522814ecbf14dc542f8da23f5eb227\": container with ID starting with ef9a8cd4d098d443b9edc0388e0cba204f522814ecbf14dc542f8da23f5eb227 not found: ID does not exist" containerID="ef9a8cd4d098d443b9edc0388e0cba204f522814ecbf14dc542f8da23f5eb227"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.926888 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef9a8cd4d098d443b9edc0388e0cba204f522814ecbf14dc542f8da23f5eb227"} err="failed to get container status \"ef9a8cd4d098d443b9edc0388e0cba204f522814ecbf14dc542f8da23f5eb227\": rpc error: code = NotFound desc = could not find container \"ef9a8cd4d098d443b9edc0388e0cba204f522814ecbf14dc542f8da23f5eb227\": container with ID starting with ef9a8cd4d098d443b9edc0388e0cba204f522814ecbf14dc542f8da23f5eb227 not found: ID does not exist"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.926901 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.926906 4983 scope.go:117] "RemoveContainer" containerID="b99c1995b8ed69a77f165df12c8b5ed28d5ffae10924757fba6e34bd06d6bfe1"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.927054 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.927235 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-67ndd"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.927285 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.927314 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 25 20:47:30 crc kubenswrapper[4983]: I1125 20:47:30.943461 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.016499 4983 scope.go:117] "RemoveContainer" containerID="82321d33208ab6fde63d4a7f69525aa9078ba24736057f6bf87559c7b2c6d966"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.055746 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.062738 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.085962 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df26c674-505f-44d6-9fd2-24d745739946-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.086214 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df26c674-505f-44d6-9fd2-24d745739946-config-data\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.086363 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c25b\" (UniqueName: \"kubernetes.io/projected/df26c674-505f-44d6-9fd2-24d745739946-kube-api-access-9c25b\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.086452 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df26c674-505f-44d6-9fd2-24d745739946-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.086531 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df26c674-505f-44d6-9fd2-24d745739946-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.086647 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df26c674-505f-44d6-9fd2-24d745739946-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.086722 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df26c674-505f-44d6-9fd2-24d745739946-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.086853 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.086973 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df26c674-505f-44d6-9fd2-24d745739946-pod-info\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.087079 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df26c674-505f-44d6-9fd2-24d745739946-server-conf\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.087191 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df26c674-505f-44d6-9fd2-24d745739946-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.087925 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.089601 4983 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.091899 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.092008 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.092045 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.092917 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.093035 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.092945 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7mg2v"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.094045 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.117435 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.188714 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df26c674-505f-44d6-9fd2-24d745739946-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.188766 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5063408-1226-4adc-86e9-194a32761df9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.188825 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df26c674-505f-44d6-9fd2-24d745739946-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.188850 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.188871 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df26c674-505f-44d6-9fd2-24d745739946-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189030 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189108 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e5063408-1226-4adc-86e9-194a32761df9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189163 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e5063408-1226-4adc-86e9-194a32761df9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189251 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e5063408-1226-4adc-86e9-194a32761df9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189330 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pbf4\" (UniqueName: \"kubernetes.io/projected/e5063408-1226-4adc-86e9-194a32761df9-kube-api-access-4pbf4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189377 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/df26c674-505f-44d6-9fd2-24d745739946-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0"
Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189394 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df26c674-505f-44d6-9fd2-24d745739946-pod-info\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") "
pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189661 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189819 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e5063408-1226-4adc-86e9-194a32761df9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189851 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df26c674-505f-44d6-9fd2-24d745739946-server-conf\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189906 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df26c674-505f-44d6-9fd2-24d745739946-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189969 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df26c674-505f-44d6-9fd2-24d745739946-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.189995 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e5063408-1226-4adc-86e9-194a32761df9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.190034 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df26c674-505f-44d6-9fd2-24d745739946-config-data\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.190050 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e5063408-1226-4adc-86e9-194a32761df9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.190106 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e5063408-1226-4adc-86e9-194a32761df9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.190133 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e5063408-1226-4adc-86e9-194a32761df9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.190157 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9c25b\" (UniqueName: \"kubernetes.io/projected/df26c674-505f-44d6-9fd2-24d745739946-kube-api-access-9c25b\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.190193 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df26c674-505f-44d6-9fd2-24d745739946-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.190633 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/df26c674-505f-44d6-9fd2-24d745739946-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.191636 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/df26c674-505f-44d6-9fd2-24d745739946-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.193110 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df26c674-505f-44d6-9fd2-24d745739946-config-data\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.193608 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/df26c674-505f-44d6-9fd2-24d745739946-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.193927 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/df26c674-505f-44d6-9fd2-24d745739946-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.194231 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/df26c674-505f-44d6-9fd2-24d745739946-server-conf\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.194351 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/df26c674-505f-44d6-9fd2-24d745739946-pod-info\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.194840 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/df26c674-505f-44d6-9fd2-24d745739946-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.212334 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c25b\" (UniqueName: \"kubernetes.io/projected/df26c674-505f-44d6-9fd2-24d745739946-kube-api-access-9c25b\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.231640 4983 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"df26c674-505f-44d6-9fd2-24d745739946\") " pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.291542 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e5063408-1226-4adc-86e9-194a32761df9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.291605 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e5063408-1226-4adc-86e9-194a32761df9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.291634 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e5063408-1226-4adc-86e9-194a32761df9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.291660 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pbf4\" (UniqueName: \"kubernetes.io/projected/e5063408-1226-4adc-86e9-194a32761df9-kube-api-access-4pbf4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.291692 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/e5063408-1226-4adc-86e9-194a32761df9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.291736 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e5063408-1226-4adc-86e9-194a32761df9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.291758 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e5063408-1226-4adc-86e9-194a32761df9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.291776 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e5063408-1226-4adc-86e9-194a32761df9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.291794 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e5063408-1226-4adc-86e9-194a32761df9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.291827 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5063408-1226-4adc-86e9-194a32761df9-config-data\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.291870 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.292059 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.292751 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e5063408-1226-4adc-86e9-194a32761df9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.293644 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5063408-1226-4adc-86e9-194a32761df9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.294431 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e5063408-1226-4adc-86e9-194a32761df9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: 
I1125 20:47:31.294727 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e5063408-1226-4adc-86e9-194a32761df9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.295010 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e5063408-1226-4adc-86e9-194a32761df9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.295808 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.300156 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e5063408-1226-4adc-86e9-194a32761df9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.300364 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e5063408-1226-4adc-86e9-194a32761df9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.301315 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e5063408-1226-4adc-86e9-194a32761df9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 
20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.301328 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e5063408-1226-4adc-86e9-194a32761df9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.317455 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pbf4\" (UniqueName: \"kubernetes.io/projected/e5063408-1226-4adc-86e9-194a32761df9-kube-api-access-4pbf4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.327571 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e5063408-1226-4adc-86e9-194a32761df9\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.411778 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.631436 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf4fae0-a5ca-48a8-9f99-5793a06f7f90" path="/var/lib/kubelet/pods/1bf4fae0-a5ca-48a8-9f99-5793a06f7f90/volumes" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.633427 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7aa78f0-48cd-4845-8a44-52fb63183dff" path="/var/lib/kubelet/pods/a7aa78f0-48cd-4845-8a44-52fb63183dff/volumes" Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.827419 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 20:47:31 crc kubenswrapper[4983]: W1125 20:47:31.849172 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf26c674_505f_44d6_9fd2_24d745739946.slice/crio-a047fe790977047a1357973ad2424efd943d4de341354ab1fdd59131b4fd66ca WatchSource:0}: Error finding container a047fe790977047a1357973ad2424efd943d4de341354ab1fdd59131b4fd66ca: Status 404 returned error can't find the container with id a047fe790977047a1357973ad2424efd943d4de341354ab1fdd59131b4fd66ca Nov 25 20:47:31 crc kubenswrapper[4983]: I1125 20:47:31.971844 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.262381 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-ttmpj"] Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.264288 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.266657 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.285679 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-ttmpj"] Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.414529 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-config\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.414999 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.415045 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwwrf\" (UniqueName: \"kubernetes.io/projected/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-kube-api-access-fwwrf\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.415090 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.415127 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.415152 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.415185 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.517127 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.517229 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: 
\"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.517267 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-config\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.517338 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.517378 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwwrf\" (UniqueName: \"kubernetes.io/projected/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-kube-api-access-fwwrf\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.517433 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.517472 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.518639 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-config\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.518731 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.518734 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.518952 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.519577 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 
20:47:32.519606 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.542503 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwwrf\" (UniqueName: \"kubernetes.io/projected/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-kube-api-access-fwwrf\") pod \"dnsmasq-dns-79bd4cc8c9-ttmpj\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.601463 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.755492 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df26c674-505f-44d6-9fd2-24d745739946","Type":"ContainerStarted","Data":"a047fe790977047a1357973ad2424efd943d4de341354ab1fdd59131b4fd66ca"} Nov 25 20:47:32 crc kubenswrapper[4983]: I1125 20:47:32.757967 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e5063408-1226-4adc-86e9-194a32761df9","Type":"ContainerStarted","Data":"6567328cc95eda3093429e640dd710acbb8555298f2d7b3321dc46caddfe1844"} Nov 25 20:47:33 crc kubenswrapper[4983]: I1125 20:47:33.128878 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-ttmpj"] Nov 25 20:47:33 crc kubenswrapper[4983]: I1125 20:47:33.773715 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" 
event={"ID":"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe","Type":"ContainerStarted","Data":"4dc67fe0f4424cce7e50cee474477bef3b86f74ccbc35a868f6e058f59dd2808"} Nov 25 20:47:34 crc kubenswrapper[4983]: I1125 20:47:34.794237 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df26c674-505f-44d6-9fd2-24d745739946","Type":"ContainerStarted","Data":"c59acafd938041a6a5ff0e3ad5db3f1fddf9b52998bcd4684cb49e8c045b72b9"} Nov 25 20:47:34 crc kubenswrapper[4983]: I1125 20:47:34.796693 4983 generic.go:334] "Generic (PLEG): container finished" podID="8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" containerID="e483ab2e90b61f9ff098449c3a391e1ad148abf96948b7123e27a20cc325d741" exitCode=0 Nov 25 20:47:34 crc kubenswrapper[4983]: I1125 20:47:34.796869 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" event={"ID":"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe","Type":"ContainerDied","Data":"e483ab2e90b61f9ff098449c3a391e1ad148abf96948b7123e27a20cc325d741"} Nov 25 20:47:34 crc kubenswrapper[4983]: I1125 20:47:34.800444 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e5063408-1226-4adc-86e9-194a32761df9","Type":"ContainerStarted","Data":"77ac37e19483af498e7393deaafb4d2b60c91655f45ee63ba362290b012ef421"} Nov 25 20:47:35 crc kubenswrapper[4983]: I1125 20:47:35.815650 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" event={"ID":"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe","Type":"ContainerStarted","Data":"fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48"} Nov 25 20:47:35 crc kubenswrapper[4983]: I1125 20:47:35.816408 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:35 crc kubenswrapper[4983]: I1125 20:47:35.846915 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" podStartSLOduration=3.8468898080000002 podStartE2EDuration="3.846889808s" podCreationTimestamp="2025-11-25 20:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:47:35.841364742 +0000 UTC m=+1236.953898144" watchObservedRunningTime="2025-11-25 20:47:35.846889808 +0000 UTC m=+1236.959423200" Nov 25 20:47:39 crc kubenswrapper[4983]: I1125 20:47:39.928246 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:47:39 crc kubenswrapper[4983]: I1125 20:47:39.929508 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.603830 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.696666 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-qntk5"] Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.697193 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" podUID="22e37849-509f-4fac-98a9-ab22a28c8c28" containerName="dnsmasq-dns" containerID="cri-o://71612eefefe28389b5cb7ada9af1855aea8b04808f0f11856bc1b30fd0ba3fc4" gracePeriod=10 Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.826004 4983 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-55478c4467-qb46c"] Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.832281 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.851888 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-qb46c"] Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.894209 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp7xp\" (UniqueName: \"kubernetes.io/projected/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-kube-api-access-xp7xp\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.894304 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-dns-svc\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.894362 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.894392 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: 
\"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.894418 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.894445 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.894463 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-config\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.902146 4983 generic.go:334] "Generic (PLEG): container finished" podID="22e37849-509f-4fac-98a9-ab22a28c8c28" containerID="71612eefefe28389b5cb7ada9af1855aea8b04808f0f11856bc1b30fd0ba3fc4" exitCode=0 Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.902204 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" event={"ID":"22e37849-509f-4fac-98a9-ab22a28c8c28","Type":"ContainerDied","Data":"71612eefefe28389b5cb7ada9af1855aea8b04808f0f11856bc1b30fd0ba3fc4"} Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.996166 4983 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-xp7xp\" (UniqueName: \"kubernetes.io/projected/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-kube-api-access-xp7xp\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.996258 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-dns-svc\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.996310 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.996338 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.996362 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.996387 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.996480 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-config\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.997364 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-dns-svc\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.997572 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.997603 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-config\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.997706 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-ovsdbserver-nb\") pod 
\"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.997768 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:42 crc kubenswrapper[4983]: I1125 20:47:42.998365 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.035744 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp7xp\" (UniqueName: \"kubernetes.io/projected/1be61955-ba9d-4fef-8bb8-41bae01eb8a2-kube-api-access-xp7xp\") pod \"dnsmasq-dns-55478c4467-qb46c\" (UID: \"1be61955-ba9d-4fef-8bb8-41bae01eb8a2\") " pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.191621 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.222223 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.404409 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-ovsdbserver-nb\") pod \"22e37849-509f-4fac-98a9-ab22a28c8c28\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.404454 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-dns-svc\") pod \"22e37849-509f-4fac-98a9-ab22a28c8c28\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.404568 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-config\") pod \"22e37849-509f-4fac-98a9-ab22a28c8c28\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.404663 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-dns-swift-storage-0\") pod \"22e37849-509f-4fac-98a9-ab22a28c8c28\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.404731 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qckp\" (UniqueName: \"kubernetes.io/projected/22e37849-509f-4fac-98a9-ab22a28c8c28-kube-api-access-9qckp\") pod \"22e37849-509f-4fac-98a9-ab22a28c8c28\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.404781 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-ovsdbserver-sb\") pod \"22e37849-509f-4fac-98a9-ab22a28c8c28\" (UID: \"22e37849-509f-4fac-98a9-ab22a28c8c28\") " Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.413046 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e37849-509f-4fac-98a9-ab22a28c8c28-kube-api-access-9qckp" (OuterVolumeSpecName: "kube-api-access-9qckp") pod "22e37849-509f-4fac-98a9-ab22a28c8c28" (UID: "22e37849-509f-4fac-98a9-ab22a28c8c28"). InnerVolumeSpecName "kube-api-access-9qckp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.469034 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "22e37849-509f-4fac-98a9-ab22a28c8c28" (UID: "22e37849-509f-4fac-98a9-ab22a28c8c28"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.472571 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "22e37849-509f-4fac-98a9-ab22a28c8c28" (UID: "22e37849-509f-4fac-98a9-ab22a28c8c28"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.479695 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "22e37849-509f-4fac-98a9-ab22a28c8c28" (UID: "22e37849-509f-4fac-98a9-ab22a28c8c28"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.494078 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "22e37849-509f-4fac-98a9-ab22a28c8c28" (UID: "22e37849-509f-4fac-98a9-ab22a28c8c28"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.494149 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-config" (OuterVolumeSpecName: "config") pod "22e37849-509f-4fac-98a9-ab22a28c8c28" (UID: "22e37849-509f-4fac-98a9-ab22a28c8c28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.507504 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.507538 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.507566 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.507580 4983 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:43 crc 
kubenswrapper[4983]: I1125 20:47:43.507594 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qckp\" (UniqueName: \"kubernetes.io/projected/22e37849-509f-4fac-98a9-ab22a28c8c28-kube-api-access-9qckp\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.507608 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22e37849-509f-4fac-98a9-ab22a28c8c28-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.756180 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-qb46c"] Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.934403 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-qb46c" event={"ID":"1be61955-ba9d-4fef-8bb8-41bae01eb8a2","Type":"ContainerStarted","Data":"1b63c4e3cd4e2b0faaa2949214d587e79471992f2a20beacdeecc3d79d599640"} Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.939972 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" event={"ID":"22e37849-509f-4fac-98a9-ab22a28c8c28","Type":"ContainerDied","Data":"b95b23ede550795d437b4cdfe7a862810f2d9143b65924d90ebc17b25dea309a"} Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.940544 4983 scope.go:117] "RemoveContainer" containerID="71612eefefe28389b5cb7ada9af1855aea8b04808f0f11856bc1b30fd0ba3fc4" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.940104 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-qntk5" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.978622 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-qntk5"] Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.985874 4983 scope.go:117] "RemoveContainer" containerID="222fb86698ce196e2fab5b14ed821288a57111881c75e6cd182e8e8cece44b16" Nov 25 20:47:43 crc kubenswrapper[4983]: I1125 20:47:43.988291 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-qntk5"] Nov 25 20:47:44 crc kubenswrapper[4983]: I1125 20:47:44.958360 4983 generic.go:334] "Generic (PLEG): container finished" podID="1be61955-ba9d-4fef-8bb8-41bae01eb8a2" containerID="be8a885a93ebd6f13aec263aa00123c8f542dc4dd0f80224418dfb544736b950" exitCode=0 Nov 25 20:47:44 crc kubenswrapper[4983]: I1125 20:47:44.958443 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-qb46c" event={"ID":"1be61955-ba9d-4fef-8bb8-41bae01eb8a2","Type":"ContainerDied","Data":"be8a885a93ebd6f13aec263aa00123c8f542dc4dd0f80224418dfb544736b950"} Nov 25 20:47:45 crc kubenswrapper[4983]: I1125 20:47:45.618576 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22e37849-509f-4fac-98a9-ab22a28c8c28" path="/var/lib/kubelet/pods/22e37849-509f-4fac-98a9-ab22a28c8c28/volumes" Nov 25 20:47:45 crc kubenswrapper[4983]: I1125 20:47:45.979867 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-qb46c" event={"ID":"1be61955-ba9d-4fef-8bb8-41bae01eb8a2","Type":"ContainerStarted","Data":"bf5683c75e39c207bb8bf9de7e6c89d1c47281329458ceea20f89f58da7295d8"} Nov 25 20:47:45 crc kubenswrapper[4983]: I1125 20:47:45.980310 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:46 crc kubenswrapper[4983]: I1125 20:47:46.033859 4983 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-qb46c" podStartSLOduration=4.033823367 podStartE2EDuration="4.033823367s" podCreationTimestamp="2025-11-25 20:47:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:47:46.029196424 +0000 UTC m=+1247.141729826" watchObservedRunningTime="2025-11-25 20:47:46.033823367 +0000 UTC m=+1247.146356789" Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.194979 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-qb46c" Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.284049 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-ttmpj"] Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.284468 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" podUID="8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" containerName="dnsmasq-dns" containerID="cri-o://fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48" gracePeriod=10 Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.815217 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.939921 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-dns-swift-storage-0\") pod \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.940022 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-config\") pod \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.940075 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwwrf\" (UniqueName: \"kubernetes.io/projected/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-kube-api-access-fwwrf\") pod \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.940129 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-openstack-edpm-ipam\") pod \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.940167 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-ovsdbserver-nb\") pod \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.940221 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-dns-svc\") pod \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.940281 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-ovsdbserver-sb\") pod \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\" (UID: \"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe\") " Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.949277 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-kube-api-access-fwwrf" (OuterVolumeSpecName: "kube-api-access-fwwrf") pod "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" (UID: "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe"). InnerVolumeSpecName "kube-api-access-fwwrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.999275 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" (UID: "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:47:53 crc kubenswrapper[4983]: I1125 20:47:53.999732 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-config" (OuterVolumeSpecName: "config") pod "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" (UID: "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.001240 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" (UID: "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.002094 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" (UID: "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.011972 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" (UID: "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.032544 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" (UID: "8d58b2fa-6d2a-4a65-af9c-31ffde174bfe"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.043605 4983 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.043640 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.043653 4983 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.043663 4983 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-config\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.043674 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwwrf\" (UniqueName: \"kubernetes.io/projected/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-kube-api-access-fwwrf\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.043685 4983 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.043694 4983 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.081161 
4983 generic.go:334] "Generic (PLEG): container finished" podID="8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" containerID="fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48" exitCode=0 Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.081259 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" event={"ID":"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe","Type":"ContainerDied","Data":"fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48"} Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.081315 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" event={"ID":"8d58b2fa-6d2a-4a65-af9c-31ffde174bfe","Type":"ContainerDied","Data":"4dc67fe0f4424cce7e50cee474477bef3b86f74ccbc35a868f6e058f59dd2808"} Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.081344 4983 scope.go:117] "RemoveContainer" containerID="fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.081363 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-ttmpj" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.105226 4983 scope.go:117] "RemoveContainer" containerID="e483ab2e90b61f9ff098449c3a391e1ad148abf96948b7123e27a20cc325d741" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.142634 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-ttmpj"] Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.149798 4983 scope.go:117] "RemoveContainer" containerID="fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48" Nov 25 20:47:54 crc kubenswrapper[4983]: E1125 20:47:54.152811 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48\": container with ID starting with fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48 not found: ID does not exist" containerID="fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.152887 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48"} err="failed to get container status \"fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48\": rpc error: code = NotFound desc = could not find container \"fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48\": container with ID starting with fce18f8d310bf4fa19adbde42557fdec23ab0d2882a351bc1811729765ac6b48 not found: ID does not exist" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.152926 4983 scope.go:117] "RemoveContainer" containerID="e483ab2e90b61f9ff098449c3a391e1ad148abf96948b7123e27a20cc325d741" Nov 25 20:47:54 crc kubenswrapper[4983]: E1125 20:47:54.153388 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"e483ab2e90b61f9ff098449c3a391e1ad148abf96948b7123e27a20cc325d741\": container with ID starting with e483ab2e90b61f9ff098449c3a391e1ad148abf96948b7123e27a20cc325d741 not found: ID does not exist" containerID="e483ab2e90b61f9ff098449c3a391e1ad148abf96948b7123e27a20cc325d741" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.153416 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e483ab2e90b61f9ff098449c3a391e1ad148abf96948b7123e27a20cc325d741"} err="failed to get container status \"e483ab2e90b61f9ff098449c3a391e1ad148abf96948b7123e27a20cc325d741\": rpc error: code = NotFound desc = could not find container \"e483ab2e90b61f9ff098449c3a391e1ad148abf96948b7123e27a20cc325d741\": container with ID starting with e483ab2e90b61f9ff098449c3a391e1ad148abf96948b7123e27a20cc325d741 not found: ID does not exist" Nov 25 20:47:54 crc kubenswrapper[4983]: I1125 20:47:54.154055 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-ttmpj"] Nov 25 20:47:55 crc kubenswrapper[4983]: I1125 20:47:55.622102 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" path="/var/lib/kubelet/pods/8d58b2fa-6d2a-4a65-af9c-31ffde174bfe/volumes" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.816774 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz"] Nov 25 20:48:06 crc kubenswrapper[4983]: E1125 20:48:06.818412 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22e37849-509f-4fac-98a9-ab22a28c8c28" containerName="dnsmasq-dns" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.818442 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e37849-509f-4fac-98a9-ab22a28c8c28" containerName="dnsmasq-dns" Nov 25 20:48:06 crc kubenswrapper[4983]: E1125 20:48:06.818469 4983 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="22e37849-509f-4fac-98a9-ab22a28c8c28" containerName="init" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.818482 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e37849-509f-4fac-98a9-ab22a28c8c28" containerName="init" Nov 25 20:48:06 crc kubenswrapper[4983]: E1125 20:48:06.818538 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" containerName="dnsmasq-dns" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.818550 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" containerName="dnsmasq-dns" Nov 25 20:48:06 crc kubenswrapper[4983]: E1125 20:48:06.818630 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" containerName="init" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.818642 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" containerName="init" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.818973 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d58b2fa-6d2a-4a65-af9c-31ffde174bfe" containerName="dnsmasq-dns" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.819029 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="22e37849-509f-4fac-98a9-ab22a28c8c28" containerName="dnsmasq-dns" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.820291 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.824719 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.824979 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.825062 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.826473 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.836994 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz"] Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.913835 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.914379 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2jqn\" (UniqueName: \"kubernetes.io/projected/599f17a5-8483-4c0e-aca0-27677abeba08-kube-api-access-x2jqn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.914615 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:06 crc kubenswrapper[4983]: I1125 20:48:06.914770 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:07 crc kubenswrapper[4983]: I1125 20:48:07.017186 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:07 crc kubenswrapper[4983]: I1125 20:48:07.017714 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2jqn\" (UniqueName: \"kubernetes.io/projected/599f17a5-8483-4c0e-aca0-27677abeba08-kube-api-access-x2jqn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:07 crc kubenswrapper[4983]: I1125 20:48:07.017904 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:07 crc kubenswrapper[4983]: I1125 20:48:07.018071 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:07 crc kubenswrapper[4983]: I1125 20:48:07.025793 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:07 crc kubenswrapper[4983]: I1125 20:48:07.026952 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:07 crc kubenswrapper[4983]: I1125 20:48:07.029254 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:07 crc 
kubenswrapper[4983]: I1125 20:48:07.048451 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2jqn\" (UniqueName: \"kubernetes.io/projected/599f17a5-8483-4c0e-aca0-27677abeba08-kube-api-access-x2jqn\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:07 crc kubenswrapper[4983]: I1125 20:48:07.152484 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:07 crc kubenswrapper[4983]: I1125 20:48:07.266772 4983 generic.go:334] "Generic (PLEG): container finished" podID="df26c674-505f-44d6-9fd2-24d745739946" containerID="c59acafd938041a6a5ff0e3ad5db3f1fddf9b52998bcd4684cb49e8c045b72b9" exitCode=0 Nov 25 20:48:07 crc kubenswrapper[4983]: I1125 20:48:07.266858 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df26c674-505f-44d6-9fd2-24d745739946","Type":"ContainerDied","Data":"c59acafd938041a6a5ff0e3ad5db3f1fddf9b52998bcd4684cb49e8c045b72b9"} Nov 25 20:48:07 crc kubenswrapper[4983]: I1125 20:48:07.784727 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz"] Nov 25 20:48:07 crc kubenswrapper[4983]: W1125 20:48:07.788217 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod599f17a5_8483_4c0e_aca0_27677abeba08.slice/crio-8676ff376d9e7d096e44a42bac5014b51b814817d5537fac793bdef8e1f2389b WatchSource:0}: Error finding container 8676ff376d9e7d096e44a42bac5014b51b814817d5537fac793bdef8e1f2389b: Status 404 returned error can't find the container with id 8676ff376d9e7d096e44a42bac5014b51b814817d5537fac793bdef8e1f2389b Nov 25 20:48:07 crc kubenswrapper[4983]: I1125 20:48:07.790525 4983 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:48:08 crc kubenswrapper[4983]: I1125 20:48:08.282262 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" event={"ID":"599f17a5-8483-4c0e-aca0-27677abeba08","Type":"ContainerStarted","Data":"8676ff376d9e7d096e44a42bac5014b51b814817d5537fac793bdef8e1f2389b"} Nov 25 20:48:08 crc kubenswrapper[4983]: I1125 20:48:08.286601 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"df26c674-505f-44d6-9fd2-24d745739946","Type":"ContainerStarted","Data":"6891012f072de7e5d7397fafc14095aa9012e8abcc4af757f836d6c22985b1d4"} Nov 25 20:48:08 crc kubenswrapper[4983]: I1125 20:48:08.286907 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 20:48:08 crc kubenswrapper[4983]: I1125 20:48:08.289009 4983 generic.go:334] "Generic (PLEG): container finished" podID="e5063408-1226-4adc-86e9-194a32761df9" containerID="77ac37e19483af498e7393deaafb4d2b60c91655f45ee63ba362290b012ef421" exitCode=0 Nov 25 20:48:08 crc kubenswrapper[4983]: I1125 20:48:08.289060 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e5063408-1226-4adc-86e9-194a32761df9","Type":"ContainerDied","Data":"77ac37e19483af498e7393deaafb4d2b60c91655f45ee63ba362290b012ef421"} Nov 25 20:48:08 crc kubenswrapper[4983]: I1125 20:48:08.319913 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.319891044 podStartE2EDuration="38.319891044s" podCreationTimestamp="2025-11-25 20:47:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:48:08.318969739 +0000 UTC m=+1269.431503151" watchObservedRunningTime="2025-11-25 20:48:08.319891044 
+0000 UTC m=+1269.432424436" Nov 25 20:48:09 crc kubenswrapper[4983]: I1125 20:48:09.302422 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e5063408-1226-4adc-86e9-194a32761df9","Type":"ContainerStarted","Data":"65c841bdbb2d7492ead816b52454b8c7f4207824be3d03ceb901d24a167826d8"} Nov 25 20:48:09 crc kubenswrapper[4983]: I1125 20:48:09.303384 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:48:09 crc kubenswrapper[4983]: I1125 20:48:09.341343 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.341320657 podStartE2EDuration="38.341320657s" podCreationTimestamp="2025-11-25 20:47:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:48:09.339298204 +0000 UTC m=+1270.451831626" watchObservedRunningTime="2025-11-25 20:48:09.341320657 +0000 UTC m=+1270.453854049" Nov 25 20:48:09 crc kubenswrapper[4983]: I1125 20:48:09.927851 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:48:09 crc kubenswrapper[4983]: I1125 20:48:09.927918 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:48:09 crc kubenswrapper[4983]: I1125 20:48:09.927966 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:48:09 crc kubenswrapper[4983]: I1125 20:48:09.928666 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"564c2d7b04bb43d995119a30a67c66d1a1f25eab8467f75e61575755980ee6c6"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:48:09 crc kubenswrapper[4983]: I1125 20:48:09.928722 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://564c2d7b04bb43d995119a30a67c66d1a1f25eab8467f75e61575755980ee6c6" gracePeriod=600 Nov 25 20:48:10 crc kubenswrapper[4983]: I1125 20:48:10.333097 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="564c2d7b04bb43d995119a30a67c66d1a1f25eab8467f75e61575755980ee6c6" exitCode=0 Nov 25 20:48:10 crc kubenswrapper[4983]: I1125 20:48:10.333265 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"564c2d7b04bb43d995119a30a67c66d1a1f25eab8467f75e61575755980ee6c6"} Nov 25 20:48:10 crc kubenswrapper[4983]: I1125 20:48:10.333892 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"e12df31370c6ce33dc30cef4c0a5235025ed26a0ae83ddc51872ed125d9d82bb"} Nov 25 20:48:10 crc kubenswrapper[4983]: I1125 20:48:10.333967 4983 scope.go:117] "RemoveContainer" 
containerID="02a7a7ce01bacff8c2eff18d797a1189b8fa10fb78c41ac31562d8f18df21be8" Nov 25 20:48:19 crc kubenswrapper[4983]: I1125 20:48:19.447933 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" event={"ID":"599f17a5-8483-4c0e-aca0-27677abeba08","Type":"ContainerStarted","Data":"5675f91439b3dbc0900f810a89163bd001f910498c21ebac6cd4f141576fcdaa"} Nov 25 20:48:19 crc kubenswrapper[4983]: I1125 20:48:19.480328 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" podStartSLOduration=2.675358952 podStartE2EDuration="13.480309067s" podCreationTimestamp="2025-11-25 20:48:06 +0000 UTC" firstStartedPulling="2025-11-25 20:48:07.790348539 +0000 UTC m=+1268.902881931" lastFinishedPulling="2025-11-25 20:48:18.595298644 +0000 UTC m=+1279.707832046" observedRunningTime="2025-11-25 20:48:19.470321743 +0000 UTC m=+1280.582855135" watchObservedRunningTime="2025-11-25 20:48:19.480309067 +0000 UTC m=+1280.592842459" Nov 25 20:48:21 crc kubenswrapper[4983]: I1125 20:48:21.300880 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 20:48:21 crc kubenswrapper[4983]: I1125 20:48:21.419929 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 20:48:30 crc kubenswrapper[4983]: I1125 20:48:30.592330 4983 generic.go:334] "Generic (PLEG): container finished" podID="599f17a5-8483-4c0e-aca0-27677abeba08" containerID="5675f91439b3dbc0900f810a89163bd001f910498c21ebac6cd4f141576fcdaa" exitCode=0 Nov 25 20:48:30 crc kubenswrapper[4983]: I1125 20:48:30.592454 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" 
event={"ID":"599f17a5-8483-4c0e-aca0-27677abeba08","Type":"ContainerDied","Data":"5675f91439b3dbc0900f810a89163bd001f910498c21ebac6cd4f141576fcdaa"} Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.190395 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.247603 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-ssh-key\") pod \"599f17a5-8483-4c0e-aca0-27677abeba08\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.247837 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-inventory\") pod \"599f17a5-8483-4c0e-aca0-27677abeba08\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.248059 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-repo-setup-combined-ca-bundle\") pod \"599f17a5-8483-4c0e-aca0-27677abeba08\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.248149 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2jqn\" (UniqueName: \"kubernetes.io/projected/599f17a5-8483-4c0e-aca0-27677abeba08-kube-api-access-x2jqn\") pod \"599f17a5-8483-4c0e-aca0-27677abeba08\" (UID: \"599f17a5-8483-4c0e-aca0-27677abeba08\") " Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.258021 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/599f17a5-8483-4c0e-aca0-27677abeba08-kube-api-access-x2jqn" (OuterVolumeSpecName: "kube-api-access-x2jqn") pod "599f17a5-8483-4c0e-aca0-27677abeba08" (UID: "599f17a5-8483-4c0e-aca0-27677abeba08"). InnerVolumeSpecName "kube-api-access-x2jqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.263653 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "599f17a5-8483-4c0e-aca0-27677abeba08" (UID: "599f17a5-8483-4c0e-aca0-27677abeba08"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.293419 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-inventory" (OuterVolumeSpecName: "inventory") pod "599f17a5-8483-4c0e-aca0-27677abeba08" (UID: "599f17a5-8483-4c0e-aca0-27677abeba08"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.301786 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "599f17a5-8483-4c0e-aca0-27677abeba08" (UID: "599f17a5-8483-4c0e-aca0-27677abeba08"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.350788 4983 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.350838 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2jqn\" (UniqueName: \"kubernetes.io/projected/599f17a5-8483-4c0e-aca0-27677abeba08-kube-api-access-x2jqn\") on node \"crc\" DevicePath \"\"" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.350854 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.350869 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/599f17a5-8483-4c0e-aca0-27677abeba08-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.618939 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" event={"ID":"599f17a5-8483-4c0e-aca0-27677abeba08","Type":"ContainerDied","Data":"8676ff376d9e7d096e44a42bac5014b51b814817d5537fac793bdef8e1f2389b"} Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.618987 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8676ff376d9e7d096e44a42bac5014b51b814817d5537fac793bdef8e1f2389b" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.619051 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.732453 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn"] Nov 25 20:48:32 crc kubenswrapper[4983]: E1125 20:48:32.732898 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599f17a5-8483-4c0e-aca0-27677abeba08" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.732916 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="599f17a5-8483-4c0e-aca0-27677abeba08" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.733129 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="599f17a5-8483-4c0e-aca0-27677abeba08" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.733779 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.736418 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.736745 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.738056 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.740338 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.747421 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn"] Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.862105 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1c31e5c-0dca-4993-b845-286b47b3b6ee-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5csrn\" (UID: \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.862604 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7j4\" (UniqueName: \"kubernetes.io/projected/f1c31e5c-0dca-4993-b845-286b47b3b6ee-kube-api-access-jt7j4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5csrn\" (UID: \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.862729 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1c31e5c-0dca-4993-b845-286b47b3b6ee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5csrn\" (UID: \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.964991 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1c31e5c-0dca-4993-b845-286b47b3b6ee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5csrn\" (UID: \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.965468 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1c31e5c-0dca-4993-b845-286b47b3b6ee-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5csrn\" (UID: \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.965635 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7j4\" (UniqueName: \"kubernetes.io/projected/f1c31e5c-0dca-4993-b845-286b47b3b6ee-kube-api-access-jt7j4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5csrn\" (UID: \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.973562 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1c31e5c-0dca-4993-b845-286b47b3b6ee-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5csrn\" (UID: \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.973651 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1c31e5c-0dca-4993-b845-286b47b3b6ee-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5csrn\" (UID: \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:32 crc kubenswrapper[4983]: I1125 20:48:32.991375 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7j4\" (UniqueName: \"kubernetes.io/projected/f1c31e5c-0dca-4993-b845-286b47b3b6ee-kube-api-access-jt7j4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5csrn\" (UID: \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:33 crc kubenswrapper[4983]: I1125 20:48:33.054140 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:33 crc kubenswrapper[4983]: I1125 20:48:33.693366 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn"] Nov 25 20:48:34 crc kubenswrapper[4983]: I1125 20:48:34.642703 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" event={"ID":"f1c31e5c-0dca-4993-b845-286b47b3b6ee","Type":"ContainerStarted","Data":"a36ee43b0bc8de8d181653898ee3f37b29dd4ed6fd22adbff3032aeb4e8a4281"} Nov 25 20:48:34 crc kubenswrapper[4983]: I1125 20:48:34.643961 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" event={"ID":"f1c31e5c-0dca-4993-b845-286b47b3b6ee","Type":"ContainerStarted","Data":"e620e440897ad2ebd091d001c21e2e3b2e78ded31635412f0d7706de6eaea782"} Nov 25 20:48:34 crc kubenswrapper[4983]: I1125 20:48:34.667974 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" podStartSLOduration=2.122471449 podStartE2EDuration="2.667948606s" podCreationTimestamp="2025-11-25 20:48:32 +0000 UTC" firstStartedPulling="2025-11-25 20:48:33.722881913 +0000 UTC m=+1294.835415305" lastFinishedPulling="2025-11-25 20:48:34.26835907 +0000 UTC m=+1295.380892462" observedRunningTime="2025-11-25 20:48:34.657749896 +0000 UTC m=+1295.770283298" watchObservedRunningTime="2025-11-25 20:48:34.667948606 +0000 UTC m=+1295.780481998" Nov 25 20:48:37 crc kubenswrapper[4983]: I1125 20:48:37.683480 4983 generic.go:334] "Generic (PLEG): container finished" podID="f1c31e5c-0dca-4993-b845-286b47b3b6ee" containerID="a36ee43b0bc8de8d181653898ee3f37b29dd4ed6fd22adbff3032aeb4e8a4281" exitCode=0 Nov 25 20:48:37 crc kubenswrapper[4983]: I1125 20:48:37.683631 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" event={"ID":"f1c31e5c-0dca-4993-b845-286b47b3b6ee","Type":"ContainerDied","Data":"a36ee43b0bc8de8d181653898ee3f37b29dd4ed6fd22adbff3032aeb4e8a4281"} Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.256496 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.345018 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1c31e5c-0dca-4993-b845-286b47b3b6ee-inventory\") pod \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\" (UID: \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\") " Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.345415 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1c31e5c-0dca-4993-b845-286b47b3b6ee-ssh-key\") pod \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\" (UID: \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\") " Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.345530 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt7j4\" (UniqueName: \"kubernetes.io/projected/f1c31e5c-0dca-4993-b845-286b47b3b6ee-kube-api-access-jt7j4\") pod \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\" (UID: \"f1c31e5c-0dca-4993-b845-286b47b3b6ee\") " Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.353112 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c31e5c-0dca-4993-b845-286b47b3b6ee-kube-api-access-jt7j4" (OuterVolumeSpecName: "kube-api-access-jt7j4") pod "f1c31e5c-0dca-4993-b845-286b47b3b6ee" (UID: "f1c31e5c-0dca-4993-b845-286b47b3b6ee"). InnerVolumeSpecName "kube-api-access-jt7j4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.390878 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c31e5c-0dca-4993-b845-286b47b3b6ee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f1c31e5c-0dca-4993-b845-286b47b3b6ee" (UID: "f1c31e5c-0dca-4993-b845-286b47b3b6ee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.393565 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c31e5c-0dca-4993-b845-286b47b3b6ee-inventory" (OuterVolumeSpecName: "inventory") pod "f1c31e5c-0dca-4993-b845-286b47b3b6ee" (UID: "f1c31e5c-0dca-4993-b845-286b47b3b6ee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.447355 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1c31e5c-0dca-4993-b845-286b47b3b6ee-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.447510 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt7j4\" (UniqueName: \"kubernetes.io/projected/f1c31e5c-0dca-4993-b845-286b47b3b6ee-kube-api-access-jt7j4\") on node \"crc\" DevicePath \"\"" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.447611 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1c31e5c-0dca-4993-b845-286b47b3b6ee-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.729092 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" 
event={"ID":"f1c31e5c-0dca-4993-b845-286b47b3b6ee","Type":"ContainerDied","Data":"e620e440897ad2ebd091d001c21e2e3b2e78ded31635412f0d7706de6eaea782"} Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.729144 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e620e440897ad2ebd091d001c21e2e3b2e78ded31635412f0d7706de6eaea782" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.729232 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5csrn" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.915362 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h"] Nov 25 20:48:39 crc kubenswrapper[4983]: E1125 20:48:39.916731 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c31e5c-0dca-4993-b845-286b47b3b6ee" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.916750 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c31e5c-0dca-4993-b845-286b47b3b6ee" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.916978 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c31e5c-0dca-4993-b845-286b47b3b6ee" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.917722 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.920268 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.920581 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.921413 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.924235 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:48:39 crc kubenswrapper[4983]: I1125 20:48:39.943089 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h"] Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.062507 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.062674 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.062738 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5z4\" (UniqueName: \"kubernetes.io/projected/96bb1f23-94d5-4a68-995b-da2394c75158-kube-api-access-qm5z4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.062799 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.165180 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.165298 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm5z4\" (UniqueName: \"kubernetes.io/projected/96bb1f23-94d5-4a68-995b-da2394c75158-kube-api-access-qm5z4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.165358 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.165462 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.172115 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.174317 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.179416 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.205500 4983 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm5z4\" (UniqueName: \"kubernetes.io/projected/96bb1f23-94d5-4a68-995b-da2394c75158-kube-api-access-qm5z4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.236101 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:48:40 crc kubenswrapper[4983]: I1125 20:48:40.827782 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h"] Nov 25 20:48:41 crc kubenswrapper[4983]: I1125 20:48:41.754122 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" event={"ID":"96bb1f23-94d5-4a68-995b-da2394c75158","Type":"ContainerStarted","Data":"e455a7e7108c703f76a586808b0c193129631df4c8605c0f0ccf9f14a3d36282"} Nov 25 20:48:41 crc kubenswrapper[4983]: I1125 20:48:41.755032 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" event={"ID":"96bb1f23-94d5-4a68-995b-da2394c75158","Type":"ContainerStarted","Data":"161ded70eff042a068c8bab4e63a24f12667790d0f371edc45f3815bfc6c0726"} Nov 25 20:48:41 crc kubenswrapper[4983]: I1125 20:48:41.792356 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" podStartSLOduration=2.3697319869999998 podStartE2EDuration="2.792325751s" podCreationTimestamp="2025-11-25 20:48:39 +0000 UTC" firstStartedPulling="2025-11-25 20:48:40.827895616 +0000 UTC m=+1301.940429008" lastFinishedPulling="2025-11-25 20:48:41.25048938 +0000 UTC m=+1302.363022772" observedRunningTime="2025-11-25 20:48:41.781237687 +0000 UTC 
m=+1302.893771119" watchObservedRunningTime="2025-11-25 20:48:41.792325751 +0000 UTC m=+1302.904859163" Nov 25 20:49:18 crc kubenswrapper[4983]: I1125 20:49:18.612186 4983 scope.go:117] "RemoveContainer" containerID="8cb540a8d12319e78e1bc0401fd3e943854bed3121d8bff8563ba5766b6eadcf" Nov 25 20:49:18 crc kubenswrapper[4983]: I1125 20:49:18.667013 4983 scope.go:117] "RemoveContainer" containerID="030e5276954d403bcf167c102210d492a52de96aed5bafbf27bd6e9fb2a09633" Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.233708 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ftbv4"] Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.237338 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.254046 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftbv4"] Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.380875 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-catalog-content\") pod \"community-operators-ftbv4\" (UID: \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\") " pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.380999 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-utilities\") pod \"community-operators-ftbv4\" (UID: \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\") " pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.381064 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-74zg9\" (UniqueName: \"kubernetes.io/projected/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-kube-api-access-74zg9\") pod \"community-operators-ftbv4\" (UID: \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\") " pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.482683 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-catalog-content\") pod \"community-operators-ftbv4\" (UID: \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\") " pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.482746 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-utilities\") pod \"community-operators-ftbv4\" (UID: \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\") " pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.482819 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74zg9\" (UniqueName: \"kubernetes.io/projected/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-kube-api-access-74zg9\") pod \"community-operators-ftbv4\" (UID: \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\") " pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.483827 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-utilities\") pod \"community-operators-ftbv4\" (UID: \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\") " pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.484015 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-catalog-content\") pod \"community-operators-ftbv4\" (UID: \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\") " pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.507185 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74zg9\" (UniqueName: \"kubernetes.io/projected/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-kube-api-access-74zg9\") pod \"community-operators-ftbv4\" (UID: \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\") " pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:03 crc kubenswrapper[4983]: I1125 20:50:03.602779 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:04 crc kubenswrapper[4983]: I1125 20:50:04.136148 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftbv4"] Nov 25 20:50:04 crc kubenswrapper[4983]: I1125 20:50:04.269030 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbv4" event={"ID":"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d","Type":"ContainerStarted","Data":"9bd4a702c17d2205b794a46fa00a88a1c7e5657a14faf73b429160617f35c852"} Nov 25 20:50:05 crc kubenswrapper[4983]: I1125 20:50:05.284766 4983 generic.go:334] "Generic (PLEG): container finished" podID="b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" containerID="d1c3b49d6a0aefaef3317f7d4068492189a7e8700120304b1d1310f002df0aa3" exitCode=0 Nov 25 20:50:05 crc kubenswrapper[4983]: I1125 20:50:05.285104 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbv4" event={"ID":"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d","Type":"ContainerDied","Data":"d1c3b49d6a0aefaef3317f7d4068492189a7e8700120304b1d1310f002df0aa3"} Nov 25 20:50:06 crc kubenswrapper[4983]: I1125 20:50:06.301394 4983 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbv4" event={"ID":"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d","Type":"ContainerStarted","Data":"bee6453f0ce9b545b8c26c2d1473379f56f2e56696c52668db26483895ad9cc7"} Nov 25 20:50:07 crc kubenswrapper[4983]: I1125 20:50:07.317857 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbv4" event={"ID":"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d","Type":"ContainerDied","Data":"bee6453f0ce9b545b8c26c2d1473379f56f2e56696c52668db26483895ad9cc7"} Nov 25 20:50:07 crc kubenswrapper[4983]: I1125 20:50:07.318061 4983 generic.go:334] "Generic (PLEG): container finished" podID="b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" containerID="bee6453f0ce9b545b8c26c2d1473379f56f2e56696c52668db26483895ad9cc7" exitCode=0 Nov 25 20:50:08 crc kubenswrapper[4983]: I1125 20:50:08.331928 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbv4" event={"ID":"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d","Type":"ContainerStarted","Data":"557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103"} Nov 25 20:50:08 crc kubenswrapper[4983]: I1125 20:50:08.355679 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ftbv4" podStartSLOduration=2.8845889099999997 podStartE2EDuration="5.35565471s" podCreationTimestamp="2025-11-25 20:50:03 +0000 UTC" firstStartedPulling="2025-11-25 20:50:05.288732771 +0000 UTC m=+1386.401266193" lastFinishedPulling="2025-11-25 20:50:07.759798601 +0000 UTC m=+1388.872331993" observedRunningTime="2025-11-25 20:50:08.351182182 +0000 UTC m=+1389.463715574" watchObservedRunningTime="2025-11-25 20:50:08.35565471 +0000 UTC m=+1389.468188102" Nov 25 20:50:10 crc kubenswrapper[4983]: I1125 20:50:10.512738 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 20:50:10 crc kubenswrapper[4983]: I1125 
20:50:10.515625 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 20:50:10 crc kubenswrapper[4983]: I1125 20:50:10.519058 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 20:50:10 crc kubenswrapper[4983]: I1125 20:50:10.519127 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 20:50:10 crc kubenswrapper[4983]: I1125 20:50:10.523436 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f9502aa-b63a-4c18-9739-94a15fd64a3e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6f9502aa-b63a-4c18-9739-94a15fd64a3e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 20:50:10 crc kubenswrapper[4983]: I1125 20:50:10.523517 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f9502aa-b63a-4c18-9739-94a15fd64a3e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6f9502aa-b63a-4c18-9739-94a15fd64a3e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 20:50:10 crc kubenswrapper[4983]: I1125 20:50:10.545608 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 20:50:10 crc kubenswrapper[4983]: I1125 20:50:10.625299 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f9502aa-b63a-4c18-9739-94a15fd64a3e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6f9502aa-b63a-4c18-9739-94a15fd64a3e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 20:50:10 crc kubenswrapper[4983]: I1125 20:50:10.625397 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f9502aa-b63a-4c18-9739-94a15fd64a3e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6f9502aa-b63a-4c18-9739-94a15fd64a3e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 20:50:10 crc kubenswrapper[4983]: I1125 20:50:10.625739 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f9502aa-b63a-4c18-9739-94a15fd64a3e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6f9502aa-b63a-4c18-9739-94a15fd64a3e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 20:50:10 crc kubenswrapper[4983]: I1125 20:50:10.649540 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f9502aa-b63a-4c18-9739-94a15fd64a3e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6f9502aa-b63a-4c18-9739-94a15fd64a3e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 20:50:10 crc kubenswrapper[4983]: I1125 20:50:10.888096 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 20:50:11 crc kubenswrapper[4983]: I1125 20:50:11.382186 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 20:50:12 crc kubenswrapper[4983]: I1125 20:50:12.394175 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6f9502aa-b63a-4c18-9739-94a15fd64a3e","Type":"ContainerStarted","Data":"695627c96852a793c9e7be164d50cbf83c602edef42d7dfb913005d189f1c388"} Nov 25 20:50:12 crc kubenswrapper[4983]: I1125 20:50:12.394945 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6f9502aa-b63a-4c18-9739-94a15fd64a3e","Type":"ContainerStarted","Data":"dbe08a2242e5812c8378c451ceb5f6b94a1d3542072cf764c7fc7b0cb551a2a2"} Nov 25 20:50:12 crc kubenswrapper[4983]: I1125 20:50:12.436168 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.436147835 podStartE2EDuration="2.436147835s" podCreationTimestamp="2025-11-25 20:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:50:12.428047801 +0000 UTC m=+1393.540581203" watchObservedRunningTime="2025-11-25 20:50:12.436147835 +0000 UTC m=+1393.548681227" Nov 25 20:50:13 crc kubenswrapper[4983]: I1125 20:50:13.412664 4983 generic.go:334] "Generic (PLEG): container finished" podID="6f9502aa-b63a-4c18-9739-94a15fd64a3e" containerID="695627c96852a793c9e7be164d50cbf83c602edef42d7dfb913005d189f1c388" exitCode=0 Nov 25 20:50:13 crc kubenswrapper[4983]: I1125 20:50:13.412765 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"6f9502aa-b63a-4c18-9739-94a15fd64a3e","Type":"ContainerDied","Data":"695627c96852a793c9e7be164d50cbf83c602edef42d7dfb913005d189f1c388"} Nov 25 20:50:13 crc kubenswrapper[4983]: I1125 20:50:13.604082 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:13 crc kubenswrapper[4983]: I1125 20:50:13.604231 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:13 crc kubenswrapper[4983]: I1125 20:50:13.672784 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:14 crc kubenswrapper[4983]: I1125 20:50:14.536829 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:14 crc kubenswrapper[4983]: I1125 20:50:14.619314 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftbv4"] Nov 25 20:50:14 crc kubenswrapper[4983]: I1125 20:50:14.879771 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.033948 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f9502aa-b63a-4c18-9739-94a15fd64a3e-kubelet-dir\") pod \"6f9502aa-b63a-4c18-9739-94a15fd64a3e\" (UID: \"6f9502aa-b63a-4c18-9739-94a15fd64a3e\") " Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.034082 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f9502aa-b63a-4c18-9739-94a15fd64a3e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6f9502aa-b63a-4c18-9739-94a15fd64a3e" (UID: "6f9502aa-b63a-4c18-9739-94a15fd64a3e"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.034741 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f9502aa-b63a-4c18-9739-94a15fd64a3e-kube-api-access\") pod \"6f9502aa-b63a-4c18-9739-94a15fd64a3e\" (UID: \"6f9502aa-b63a-4c18-9739-94a15fd64a3e\") " Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.035487 4983 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f9502aa-b63a-4c18-9739-94a15fd64a3e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.043936 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f9502aa-b63a-4c18-9739-94a15fd64a3e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6f9502aa-b63a-4c18-9739-94a15fd64a3e" (UID: "6f9502aa-b63a-4c18-9739-94a15fd64a3e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.138542 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f9502aa-b63a-4c18-9739-94a15fd64a3e-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.450860 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.450935 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6f9502aa-b63a-4c18-9739-94a15fd64a3e","Type":"ContainerDied","Data":"dbe08a2242e5812c8378c451ceb5f6b94a1d3542072cf764c7fc7b0cb551a2a2"} Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.451144 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbe08a2242e5812c8378c451ceb5f6b94a1d3542072cf764c7fc7b0cb551a2a2" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.562075 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 20:50:15 crc kubenswrapper[4983]: E1125 20:50:15.562802 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9502aa-b63a-4c18-9739-94a15fd64a3e" containerName="pruner" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.562823 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9502aa-b63a-4c18-9739-94a15fd64a3e" containerName="pruner" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.563120 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f9502aa-b63a-4c18-9739-94a15fd64a3e" containerName="pruner" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.564200 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.567675 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.569065 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.592640 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 20:50:15 crc kubenswrapper[4983]: E1125 20:50:15.639376 4983 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod6f9502aa_b63a_4c18_9739_94a15fd64a3e.slice/crio-dbe08a2242e5812c8378c451ceb5f6b94a1d3542072cf764c7fc7b0cb551a2a2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod6f9502aa_b63a_4c18_9739_94a15fd64a3e.slice\": RecentStats: unable to find data in memory cache]" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.654121 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea2745ce-1570-4841-8110-1249c0f897e7-var-lock\") pod \"installer-9-crc\" (UID: \"ea2745ce-1570-4841-8110-1249c0f897e7\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.654211 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea2745ce-1570-4841-8110-1249c0f897e7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ea2745ce-1570-4841-8110-1249c0f897e7\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.654264 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea2745ce-1570-4841-8110-1249c0f897e7-kube-api-access\") pod \"installer-9-crc\" (UID: \"ea2745ce-1570-4841-8110-1249c0f897e7\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.756863 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea2745ce-1570-4841-8110-1249c0f897e7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ea2745ce-1570-4841-8110-1249c0f897e7\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.756943 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea2745ce-1570-4841-8110-1249c0f897e7-kube-api-access\") pod \"installer-9-crc\" (UID: \"ea2745ce-1570-4841-8110-1249c0f897e7\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.757035 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea2745ce-1570-4841-8110-1249c0f897e7-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ea2745ce-1570-4841-8110-1249c0f897e7\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.757075 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea2745ce-1570-4841-8110-1249c0f897e7-var-lock\") pod \"installer-9-crc\" (UID: \"ea2745ce-1570-4841-8110-1249c0f897e7\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.757126 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/ea2745ce-1570-4841-8110-1249c0f897e7-var-lock\") pod \"installer-9-crc\" (UID: \"ea2745ce-1570-4841-8110-1249c0f897e7\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.789511 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea2745ce-1570-4841-8110-1249c0f897e7-kube-api-access\") pod \"installer-9-crc\" (UID: \"ea2745ce-1570-4841-8110-1249c0f897e7\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:15 crc kubenswrapper[4983]: I1125 20:50:15.892150 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:16 crc kubenswrapper[4983]: W1125 20:50:16.393640 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podea2745ce_1570_4841_8110_1249c0f897e7.slice/crio-e38bda5c8d45639c91ce3f2cfc4255e23b2f1bcbf9d91538ef2277b60fe3068d WatchSource:0}: Error finding container e38bda5c8d45639c91ce3f2cfc4255e23b2f1bcbf9d91538ef2277b60fe3068d: Status 404 returned error can't find the container with id e38bda5c8d45639c91ce3f2cfc4255e23b2f1bcbf9d91538ef2277b60fe3068d Nov 25 20:50:16 crc kubenswrapper[4983]: I1125 20:50:16.399936 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 20:50:16 crc kubenswrapper[4983]: I1125 20:50:16.464877 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ea2745ce-1570-4841-8110-1249c0f897e7","Type":"ContainerStarted","Data":"e38bda5c8d45639c91ce3f2cfc4255e23b2f1bcbf9d91538ef2277b60fe3068d"} Nov 25 20:50:16 crc kubenswrapper[4983]: I1125 20:50:16.465221 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ftbv4" podUID="b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" 
containerName="registry-server" containerID="cri-o://557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103" gracePeriod=2 Nov 25 20:50:16 crc kubenswrapper[4983]: I1125 20:50:16.978159 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.107306 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-utilities\") pod \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\" (UID: \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\") " Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.107830 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74zg9\" (UniqueName: \"kubernetes.io/projected/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-kube-api-access-74zg9\") pod \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\" (UID: \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\") " Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.107863 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-catalog-content\") pod \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\" (UID: \"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d\") " Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.109279 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-utilities" (OuterVolumeSpecName: "utilities") pod "b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" (UID: "b39fac4d-57ca-4793-8ba9-d6c2f8142e1d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.114351 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-kube-api-access-74zg9" (OuterVolumeSpecName: "kube-api-access-74zg9") pod "b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" (UID: "b39fac4d-57ca-4793-8ba9-d6c2f8142e1d"). InnerVolumeSpecName "kube-api-access-74zg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.179886 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" (UID: "b39fac4d-57ca-4793-8ba9-d6c2f8142e1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.212218 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.212266 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74zg9\" (UniqueName: \"kubernetes.io/projected/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-kube-api-access-74zg9\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.212286 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.488706 4983 generic.go:334] "Generic (PLEG): container finished" podID="b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" 
containerID="557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103" exitCode=0 Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.488775 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbv4" event={"ID":"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d","Type":"ContainerDied","Data":"557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103"} Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.488850 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftbv4" event={"ID":"b39fac4d-57ca-4793-8ba9-d6c2f8142e1d","Type":"ContainerDied","Data":"9bd4a702c17d2205b794a46fa00a88a1c7e5657a14faf73b429160617f35c852"} Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.488880 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftbv4" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.488887 4983 scope.go:117] "RemoveContainer" containerID="557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.491926 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ea2745ce-1570-4841-8110-1249c0f897e7","Type":"ContainerStarted","Data":"b9ea85136288a8c67b04e65369838f3fa042fcb964af78303b2b9207b0b9ab95"} Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.532771 4983 scope.go:117] "RemoveContainer" containerID="bee6453f0ce9b545b8c26c2d1473379f56f2e56696c52668db26483895ad9cc7" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.560614 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.5605863700000002 podStartE2EDuration="2.56058637s" podCreationTimestamp="2025-11-25 20:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:50:17.532763664 +0000 UTC m=+1398.645297096" watchObservedRunningTime="2025-11-25 20:50:17.56058637 +0000 UTC m=+1398.673119782" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.569221 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftbv4"] Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.585329 4983 scope.go:117] "RemoveContainer" containerID="d1c3b49d6a0aefaef3317f7d4068492189a7e8700120304b1d1310f002df0aa3" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.592899 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ftbv4"] Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.623433 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" path="/var/lib/kubelet/pods/b39fac4d-57ca-4793-8ba9-d6c2f8142e1d/volumes" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.628516 4983 scope.go:117] "RemoveContainer" containerID="557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103" Nov 25 20:50:17 crc kubenswrapper[4983]: E1125 20:50:17.629227 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103\": container with ID starting with 557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103 not found: ID does not exist" containerID="557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.629290 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103"} err="failed to get container status \"557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103\": rpc error: code = NotFound desc = could not find container 
\"557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103\": container with ID starting with 557e7b430994ec67e608ba96004df5cf9c5f2dbf1e6bdde8e0b49bd5d73b4103 not found: ID does not exist" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.629325 4983 scope.go:117] "RemoveContainer" containerID="bee6453f0ce9b545b8c26c2d1473379f56f2e56696c52668db26483895ad9cc7" Nov 25 20:50:17 crc kubenswrapper[4983]: E1125 20:50:17.629841 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bee6453f0ce9b545b8c26c2d1473379f56f2e56696c52668db26483895ad9cc7\": container with ID starting with bee6453f0ce9b545b8c26c2d1473379f56f2e56696c52668db26483895ad9cc7 not found: ID does not exist" containerID="bee6453f0ce9b545b8c26c2d1473379f56f2e56696c52668db26483895ad9cc7" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.629895 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bee6453f0ce9b545b8c26c2d1473379f56f2e56696c52668db26483895ad9cc7"} err="failed to get container status \"bee6453f0ce9b545b8c26c2d1473379f56f2e56696c52668db26483895ad9cc7\": rpc error: code = NotFound desc = could not find container \"bee6453f0ce9b545b8c26c2d1473379f56f2e56696c52668db26483895ad9cc7\": container with ID starting with bee6453f0ce9b545b8c26c2d1473379f56f2e56696c52668db26483895ad9cc7 not found: ID does not exist" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.629927 4983 scope.go:117] "RemoveContainer" containerID="d1c3b49d6a0aefaef3317f7d4068492189a7e8700120304b1d1310f002df0aa3" Nov 25 20:50:17 crc kubenswrapper[4983]: E1125 20:50:17.630243 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1c3b49d6a0aefaef3317f7d4068492189a7e8700120304b1d1310f002df0aa3\": container with ID starting with d1c3b49d6a0aefaef3317f7d4068492189a7e8700120304b1d1310f002df0aa3 not found: ID does not exist" 
containerID="d1c3b49d6a0aefaef3317f7d4068492189a7e8700120304b1d1310f002df0aa3" Nov 25 20:50:17 crc kubenswrapper[4983]: I1125 20:50:17.630271 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1c3b49d6a0aefaef3317f7d4068492189a7e8700120304b1d1310f002df0aa3"} err="failed to get container status \"d1c3b49d6a0aefaef3317f7d4068492189a7e8700120304b1d1310f002df0aa3\": rpc error: code = NotFound desc = could not find container \"d1c3b49d6a0aefaef3317f7d4068492189a7e8700120304b1d1310f002df0aa3\": container with ID starting with d1c3b49d6a0aefaef3317f7d4068492189a7e8700120304b1d1310f002df0aa3 not found: ID does not exist" Nov 25 20:50:18 crc kubenswrapper[4983]: I1125 20:50:18.772940 4983 scope.go:117] "RemoveContainer" containerID="f37a4aa7b754b7c363a33df467750d063cd8b66e6802005e8f322e1b4a817b64" Nov 25 20:50:18 crc kubenswrapper[4983]: I1125 20:50:18.810110 4983 scope.go:117] "RemoveContainer" containerID="5174a126032da749d36001d8aec55c44cdc275096d8acf2fc759afd0a2a5f9de" Nov 25 20:50:18 crc kubenswrapper[4983]: I1125 20:50:18.856437 4983 scope.go:117] "RemoveContainer" containerID="8bf8ef8946116740b6988388de6bbabc4660bbc05b70481110d1c9f1d33b4f7e" Nov 25 20:50:18 crc kubenswrapper[4983]: I1125 20:50:18.883880 4983 scope.go:117] "RemoveContainer" containerID="8f2ad1959c96277a0ef16b1585edeb1c2106f497c9d7f3deb9228f4b31ad7af6" Nov 25 20:50:18 crc kubenswrapper[4983]: I1125 20:50:18.927673 4983 scope.go:117] "RemoveContainer" containerID="3c63ebd7b01f4b9f2503d1923ef9add9edcdb0b530559646256ebb10898a2e83" Nov 25 20:50:18 crc kubenswrapper[4983]: I1125 20:50:18.981635 4983 scope.go:117] "RemoveContainer" containerID="f50346e4e65d94d575c5457244b88459a1177672fd89d5a5b3b3538898b2c7b1" Nov 25 20:50:19 crc kubenswrapper[4983]: I1125 20:50:19.023628 4983 scope.go:117] "RemoveContainer" containerID="14009cf7136ae8752d942d54dc30f5b24e74800affa9dc49231548fa0eca6ca6" Nov 25 20:50:39 crc kubenswrapper[4983]: I1125 20:50:39.927828 4983 
patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:50:39 crc kubenswrapper[4983]: I1125 20:50:39.928608 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.719041 4983 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.721088 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" containerName="registry-server" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.721129 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" containerName="registry-server" Nov 25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.721204 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" containerName="extract-content" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.721221 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" containerName="extract-content" Nov 25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.721259 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" containerName="extract-utilities" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.721276 4983 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" containerName="extract-utilities" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.721746 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b39fac4d-57ca-4793-8ba9-d6c2f8142e1d" containerName="registry-server" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.722988 4983 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.723071 4983 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.723237 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.724048 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff" gracePeriod=15 Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.724045 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776" gracePeriod=15 Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.724233 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78" gracePeriod=15 Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 
20:50:54.724393 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c" gracePeriod=15 Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.724633 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b" gracePeriod=15 Nov 25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.725085 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.725198 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.725285 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.725356 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.725448 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.725522 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 
25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.725633 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.725707 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.725794 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.725891 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.726032 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.726125 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.726232 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.726304 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.727831 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.728196 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.728296 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.728380 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.728464 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.728542 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.733746 4983 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.786220 4983 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.806251 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 
20:50:54.806323 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.806353 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.806381 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.806438 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.806462 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.806632 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.806669 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.888778 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="ae259426-d08e-4d8f-b3e7-f06847f1c2da" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.889609 4983 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-state-metrics-0.187b5b1c018c40c8 openstack 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openstack,Name:kube-state-metrics-0,UID:ae259426-d08e-4d8f-b3e7-f06847f1c2da,APIVersion:v1,ResourceVersion:44973,FieldPath:spec.containers{kube-state-metrics},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 503,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 20:50:54.888861896 +0000 UTC m=+1436.001395298,LastTimestamp:2025-11-25 
20:50:54.888861896 +0000 UTC m=+1436.001395298,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.908848 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.908923 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.908970 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.909010 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.909118 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.909157 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.909090 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.909087 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.909049 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.909276 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 
crc kubenswrapper[4983]: I1125 20:50:54.909383 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.909469 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.909582 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.909621 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: I1125 20:50:54.909636 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc 
kubenswrapper[4983]: I1125 20:50:54.909728 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:54 crc kubenswrapper[4983]: E1125 20:50:54.989892 4983 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-state-metrics-0.187b5b1c018c40c8 openstack 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openstack,Name:kube-state-metrics-0,UID:ae259426-d08e-4d8f-b3e7-f06847f1c2da,APIVersion:v1,ResourceVersion:44973,FieldPath:spec.containers{kube-state-metrics},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 503,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 20:50:54.888861896 +0000 UTC m=+1436.001395298,LastTimestamp:2025-11-25 20:50:54.888861896 +0000 UTC m=+1436.001395298,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.088859 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.252110 4983 generic.go:334] "Generic (PLEG): container finished" podID="ea2745ce-1570-4841-8110-1249c0f897e7" containerID="b9ea85136288a8c67b04e65369838f3fa042fcb964af78303b2b9207b0b9ab95" exitCode=0 Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.252230 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ea2745ce-1570-4841-8110-1249c0f897e7","Type":"ContainerDied","Data":"b9ea85136288a8c67b04e65369838f3fa042fcb964af78303b2b9207b0b9ab95"} Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.254343 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.255353 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b5a31fc06fdb9ad7895d13ee91ab231dc45d2d0f41e3a95c26c6d2bc9f29b63e"} Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.259252 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.261326 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.262291 4983 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff" exitCode=0 Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.262326 4983 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78" exitCode=0 Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.262343 4983 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b" exitCode=0 Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.262361 4983 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c" exitCode=2 Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.262416 4983 scope.go:117] "RemoveContainer" containerID="63170f96d84ad59a449872c6d8fecd2b57742ea6ded6dec45cd5ba045a4291a9" Nov 25 20:50:55 crc kubenswrapper[4983]: E1125 20:50:55.673378 4983 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:55 crc kubenswrapper[4983]: E1125 20:50:55.673996 4983 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:55 crc kubenswrapper[4983]: E1125 20:50:55.674808 4983 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:55 crc kubenswrapper[4983]: E1125 
20:50:55.675264 4983 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:55 crc kubenswrapper[4983]: E1125 20:50:55.675805 4983 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:55 crc kubenswrapper[4983]: I1125 20:50:55.675855 4983 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 25 20:50:55 crc kubenswrapper[4983]: E1125 20:50:55.676301 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="200ms" Nov 25 20:50:55 crc kubenswrapper[4983]: E1125 20:50:55.882394 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="400ms" Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.279790 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"36f7b702466d9ac4d63fec5fbab59de4d51eac151470978fab039ed8b9e8c760"} Nov 25 20:50:56 crc kubenswrapper[4983]: E1125 20:50:56.280647 4983 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection 
refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.281006 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:56 crc kubenswrapper[4983]: E1125 20:50:56.283640 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="800ms" Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.284331 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.691153 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.693049 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.754065 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea2745ce-1570-4841-8110-1249c0f897e7-var-lock\") pod \"ea2745ce-1570-4841-8110-1249c0f897e7\" (UID: \"ea2745ce-1570-4841-8110-1249c0f897e7\") " Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.754114 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea2745ce-1570-4841-8110-1249c0f897e7-kubelet-dir\") pod \"ea2745ce-1570-4841-8110-1249c0f897e7\" (UID: \"ea2745ce-1570-4841-8110-1249c0f897e7\") " Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.754364 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea2745ce-1570-4841-8110-1249c0f897e7-kube-api-access\") pod \"ea2745ce-1570-4841-8110-1249c0f897e7\" (UID: \"ea2745ce-1570-4841-8110-1249c0f897e7\") " Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.754589 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2745ce-1570-4841-8110-1249c0f897e7-var-lock" (OuterVolumeSpecName: "var-lock") pod "ea2745ce-1570-4841-8110-1249c0f897e7" (UID: "ea2745ce-1570-4841-8110-1249c0f897e7"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.754684 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea2745ce-1570-4841-8110-1249c0f897e7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ea2745ce-1570-4841-8110-1249c0f897e7" (UID: "ea2745ce-1570-4841-8110-1249c0f897e7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.755430 4983 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea2745ce-1570-4841-8110-1249c0f897e7-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.755458 4983 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea2745ce-1570-4841-8110-1249c0f897e7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.819742 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2745ce-1570-4841-8110-1249c0f897e7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ea2745ce-1570-4841-8110-1249c0f897e7" (UID: "ea2745ce-1570-4841-8110-1249c0f897e7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:50:56 crc kubenswrapper[4983]: I1125 20:50:56.859049 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea2745ce-1570-4841-8110-1249c0f897e7-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:57 crc kubenswrapper[4983]: E1125 20:50:57.084934 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="1.6s" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.307898 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ea2745ce-1570-4841-8110-1249c0f897e7","Type":"ContainerDied","Data":"e38bda5c8d45639c91ce3f2cfc4255e23b2f1bcbf9d91538ef2277b60fe3068d"} Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.307939 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.307967 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e38bda5c8d45639c91ce3f2cfc4255e23b2f1bcbf9d91538ef2277b60fe3068d" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.311913 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.313037 4983 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776" exitCode=0 Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.313115 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcb012689f0b6ef1e16ca18c87abd239866b358ddc07672bb690aade899c646a" Nov 25 20:50:57 crc kubenswrapper[4983]: E1125 20:50:57.314187 4983 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.382146 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.387300 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 20:50:57 crc 
kubenswrapper[4983]: I1125 20:50:57.388669 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.390665 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.391326 4983 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.472749 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.472975 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.473408 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.473521 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.473856 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.474061 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.474824 4983 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.474864 4983 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.474884 4983 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 20:50:57 crc kubenswrapper[4983]: I1125 20:50:57.620427 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 25 20:50:58 crc kubenswrapper[4983]: I1125 20:50:58.324003 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:50:58 crc kubenswrapper[4983]: I1125 20:50:58.325061 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:58 crc kubenswrapper[4983]: I1125 20:50:58.325606 4983 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:58 crc kubenswrapper[4983]: I1125 20:50:58.330044 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:58 crc kubenswrapper[4983]: I1125 20:50:58.330407 4983 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:58 crc kubenswrapper[4983]: E1125 20:50:58.691702 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="3.2s" Nov 25 20:50:59 crc kubenswrapper[4983]: I1125 
20:50:59.617189 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:50:59 crc kubenswrapper[4983]: I1125 20:50:59.618855 4983 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:01 crc kubenswrapper[4983]: E1125 20:51:01.893831 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="6.4s" Nov 25 20:51:04 crc kubenswrapper[4983]: I1125 20:51:04.888748 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="ae259426-d08e-4d8f-b3e7-f06847f1c2da" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 20:51:04 crc kubenswrapper[4983]: E1125 20:51:04.991423 4983 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/events\": dial tcp 38.102.83.173:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-state-metrics-0.187b5b1c018c40c8 openstack 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openstack,Name:kube-state-metrics-0,UID:ae259426-d08e-4d8f-b3e7-f06847f1c2da,APIVersion:v1,ResourceVersion:44973,FieldPath:spec.containers{kube-state-metrics},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 503,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 20:50:54.888861896 +0000 UTC m=+1436.001395298,LastTimestamp:2025-11-25 20:50:54.888861896 +0000 UTC m=+1436.001395298,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 20:51:05 crc kubenswrapper[4983]: I1125 20:51:05.206227 4983 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8080/readyz\": dial tcp 10.217.0.47:8080: connect: connection refused" Nov 25 20:51:05 crc kubenswrapper[4983]: I1125 20:51:05.413004 4983 generic.go:334] "Generic (PLEG): container finished" podID="e1668e7f-55bb-415c-b378-1c70483b30a6" containerID="008b13266f643deb81f8b41a3984d80ef9128e634260e9b2080a6431dd4580c1" exitCode=1 Nov 25 20:51:05 crc kubenswrapper[4983]: I1125 20:51:05.413084 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" event={"ID":"e1668e7f-55bb-415c-b378-1c70483b30a6","Type":"ContainerDied","Data":"008b13266f643deb81f8b41a3984d80ef9128e634260e9b2080a6431dd4580c1"} Nov 25 20:51:05 crc kubenswrapper[4983]: I1125 20:51:05.413859 4983 scope.go:117] "RemoveContainer" containerID="008b13266f643deb81f8b41a3984d80ef9128e634260e9b2080a6431dd4580c1" Nov 25 20:51:05 crc kubenswrapper[4983]: I1125 20:51:05.414568 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:05 crc kubenswrapper[4983]: I1125 20:51:05.414959 4983 status_manager.go:851] "Failed to get status for pod" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-67cb4dc6d4-9zpxb\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:05 crc kubenswrapper[4983]: I1125 20:51:05.417297 4983 generic.go:334] "Generic (PLEG): container finished" podID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" containerID="f712243812834ca6069eb692c71375f383526ac41298ce33047d603b57260e3a" exitCode=1 Nov 25 20:51:05 crc kubenswrapper[4983]: I1125 20:51:05.417333 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" event={"ID":"74baeb7c-21f0-4d1c-9a61-7694f59cc161","Type":"ContainerDied","Data":"f712243812834ca6069eb692c71375f383526ac41298ce33047d603b57260e3a"} Nov 25 20:51:05 crc kubenswrapper[4983]: I1125 20:51:05.417945 4983 status_manager.go:851] "Failed to get status for pod" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6dcc87d69d-p8fwj\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:05 crc kubenswrapper[4983]: I1125 20:51:05.418197 4983 scope.go:117] "RemoveContainer" containerID="f712243812834ca6069eb692c71375f383526ac41298ce33047d603b57260e3a" Nov 25 20:51:05 crc kubenswrapper[4983]: I1125 20:51:05.418251 4983 status_manager.go:851] "Failed to get status for pod" 
podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:05 crc kubenswrapper[4983]: I1125 20:51:05.418544 4983 status_manager.go:851] "Failed to get status for pod" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-67cb4dc6d4-9zpxb\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.432122 4983 generic.go:334] "Generic (PLEG): container finished" podID="e1668e7f-55bb-415c-b378-1c70483b30a6" containerID="bc7c81b4cbadb4eb3ea3bba28a616a384d0d45635730442b3cae70467dfecbb9" exitCode=1 Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.432198 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" event={"ID":"e1668e7f-55bb-415c-b378-1c70483b30a6","Type":"ContainerDied","Data":"bc7c81b4cbadb4eb3ea3bba28a616a384d0d45635730442b3cae70467dfecbb9"} Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.432690 4983 scope.go:117] "RemoveContainer" containerID="008b13266f643deb81f8b41a3984d80ef9128e634260e9b2080a6431dd4580c1" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.433389 4983 scope.go:117] "RemoveContainer" containerID="bc7c81b4cbadb4eb3ea3bba28a616a384d0d45635730442b3cae70467dfecbb9" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.433612 4983 status_manager.go:851] "Failed to get status for pod" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6dcc87d69d-p8fwj\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:06 crc kubenswrapper[4983]: E1125 20:51:06.433755 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-67cb4dc6d4-9zpxb_openstack-operators(e1668e7f-55bb-415c-b378-1c70483b30a6)\"" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.433929 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.434471 4983 status_manager.go:851] "Failed to get status for pod" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-67cb4dc6d4-9zpxb\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.437714 4983 generic.go:334] "Generic (PLEG): container finished" podID="a096f840-35b3-48c1-8c0e-762b67b8bde0" containerID="faf022606aab866ff05a368881d0a696a1aaed4c2a6196b2d87a2b326d9574df" exitCode=1 Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.437808 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" 
event={"ID":"a096f840-35b3-48c1-8c0e-762b67b8bde0","Type":"ContainerDied","Data":"faf022606aab866ff05a368881d0a696a1aaed4c2a6196b2d87a2b326d9574df"} Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.439792 4983 status_manager.go:851] "Failed to get status for pod" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-64cdc6ff96-p8q9g\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.439864 4983 scope.go:117] "RemoveContainer" containerID="faf022606aab866ff05a368881d0a696a1aaed4c2a6196b2d87a2b326d9574df" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.440218 4983 status_manager.go:851] "Failed to get status for pod" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6dcc87d69d-p8fwj\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.440715 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.440748 4983 generic.go:334] "Generic (PLEG): container finished" podID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" containerID="480172d063f01071881eb46657e72676ebccdded430b4849f96406415a536761" exitCode=1 Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.440883 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" event={"ID":"74baeb7c-21f0-4d1c-9a61-7694f59cc161","Type":"ContainerDied","Data":"480172d063f01071881eb46657e72676ebccdded430b4849f96406415a536761"} Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.441151 4983 status_manager.go:851] "Failed to get status for pod" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-67cb4dc6d4-9zpxb\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.441715 4983 status_manager.go:851] "Failed to get status for pod" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-64cdc6ff96-p8q9g\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.441916 4983 scope.go:117] "RemoveContainer" containerID="480172d063f01071881eb46657e72676ebccdded430b4849f96406415a536761" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.442055 4983 status_manager.go:851] "Failed to get status for pod" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6dcc87d69d-p8fwj\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:06 crc kubenswrapper[4983]: E1125 20:51:06.442357 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=metallb-operator-controller-manager-6dcc87d69d-p8fwj_metallb-system(74baeb7c-21f0-4d1c-9a61-7694f59cc161)\"" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.442462 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.442766 4983 status_manager.go:851] "Failed to get status for pod" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-67cb4dc6d4-9zpxb\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:06 crc kubenswrapper[4983]: I1125 20:51:06.521187 4983 scope.go:117] "RemoveContainer" containerID="f712243812834ca6069eb692c71375f383526ac41298ce33047d603b57260e3a" Nov 25 20:51:07 crc kubenswrapper[4983]: I1125 20:51:07.456148 4983 generic.go:334] "Generic (PLEG): container finished" podID="a096f840-35b3-48c1-8c0e-762b67b8bde0" containerID="1630196c9d10cb5193e4f92c7dee14b3ff3b2cb8bf68b24ed51bd9d02e166dc5" exitCode=1 Nov 25 20:51:07 crc kubenswrapper[4983]: I1125 20:51:07.456283 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" event={"ID":"a096f840-35b3-48c1-8c0e-762b67b8bde0","Type":"ContainerDied","Data":"1630196c9d10cb5193e4f92c7dee14b3ff3b2cb8bf68b24ed51bd9d02e166dc5"} Nov 25 20:51:07 crc kubenswrapper[4983]: I1125 20:51:07.456793 4983 scope.go:117] "RemoveContainer" 
containerID="faf022606aab866ff05a368881d0a696a1aaed4c2a6196b2d87a2b326d9574df" Nov 25 20:51:07 crc kubenswrapper[4983]: I1125 20:51:07.457855 4983 status_manager.go:851] "Failed to get status for pod" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6dcc87d69d-p8fwj\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:07 crc kubenswrapper[4983]: I1125 20:51:07.458196 4983 scope.go:117] "RemoveContainer" containerID="1630196c9d10cb5193e4f92c7dee14b3ff3b2cb8bf68b24ed51bd9d02e166dc5" Nov 25 20:51:07 crc kubenswrapper[4983]: I1125 20:51:07.460663 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:07 crc kubenswrapper[4983]: E1125 20:51:07.461328 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-64cdc6ff96-p8q9g_openstack-operators(a096f840-35b3-48c1-8c0e-762b67b8bde0)\"" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" Nov 25 20:51:07 crc kubenswrapper[4983]: I1125 20:51:07.462087 4983 status_manager.go:851] "Failed to get status for pod" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-67cb4dc6d4-9zpxb\": dial tcp 38.102.83.173:6443: connect: 
connection refused" Nov 25 20:51:07 crc kubenswrapper[4983]: I1125 20:51:07.462892 4983 status_manager.go:851] "Failed to get status for pod" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-64cdc6ff96-p8q9g\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:07 crc kubenswrapper[4983]: E1125 20:51:07.649986 4983 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openstack/ovsdbserver-nb-0" volumeName="ovndbcluster-nb-etc-ovn" Nov 25 20:51:08 crc kubenswrapper[4983]: E1125 20:51:08.297030 4983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.173:6443: connect: connection refused" interval="7s" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.486239 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.486320 4983 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43" exitCode=1 Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.486431 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43"} Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.487423 4983 scope.go:117] "RemoveContainer" containerID="10ac3c7e2b8060a947e062ac279ebcd2a5054406dbd0b6a959289e080ce8ea43" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.487928 4983 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.488473 4983 status_manager.go:851] "Failed to get status for pod" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-64cdc6ff96-p8q9g\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.489077 4983 status_manager.go:851] "Failed to get status for pod" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6dcc87d69d-p8fwj\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.489659 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: 
connection refused" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.490220 4983 status_manager.go:851] "Failed to get status for pod" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-67cb4dc6d4-9zpxb\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.604992 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.606789 4983 status_manager.go:851] "Failed to get status for pod" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6dcc87d69d-p8fwj\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.607749 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.608413 4983 status_manager.go:851] "Failed to get status for pod" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-67cb4dc6d4-9zpxb\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.609181 4983 
status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.613072 4983 status_manager.go:851] "Failed to get status for pod" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-64cdc6ff96-p8q9g\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.630036 4983 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e409ec05-8a05-432f-ad38-8f7f3591bc3b" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.630109 4983 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e409ec05-8a05-432f-ad38-8f7f3591bc3b" Nov 25 20:51:08 crc kubenswrapper[4983]: E1125 20:51:08.631044 4983 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:51:08 crc kubenswrapper[4983]: I1125 20:51:08.632211 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:51:08 crc kubenswrapper[4983]: W1125 20:51:08.688005 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-1d49fb2b6672998e8e49f8ab1cbe9d0565ff1be7966106f3442ecab0c3c432e6 WatchSource:0}: Error finding container 1d49fb2b6672998e8e49f8ab1cbe9d0565ff1be7966106f3442ecab0c3c432e6: Status 404 returned error can't find the container with id 1d49fb2b6672998e8e49f8ab1cbe9d0565ff1be7966106f3442ecab0c3c432e6 Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.504704 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.505130 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3b7ac2107186d03e30f3f1a53e80f4a1339afe32a9663be7464fb95cf298871a"} Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.506040 4983 status_manager.go:851] "Failed to get status for pod" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-64cdc6ff96-p8q9g\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.506289 4983 status_manager.go:851] "Failed to get status for pod" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6dcc87d69d-p8fwj\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.506541 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.506897 4983 status_manager.go:851] "Failed to get status for pod" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-67cb4dc6d4-9zpxb\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.507327 4983 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1a2a47352e5338f9c7025ad2b3f33116a827a3640ef1fb6c043662bfcaced74a" exitCode=0 Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.507364 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1a2a47352e5338f9c7025ad2b3f33116a827a3640ef1fb6c043662bfcaced74a"} Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.507389 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1d49fb2b6672998e8e49f8ab1cbe9d0565ff1be7966106f3442ecab0c3c432e6"} Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.507466 4983 status_manager.go:851] "Failed to 
get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.507647 4983 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e409ec05-8a05-432f-ad38-8f7f3591bc3b" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.507662 4983 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e409ec05-8a05-432f-ad38-8f7f3591bc3b" Nov 25 20:51:09 crc kubenswrapper[4983]: E1125 20:51:09.507850 4983 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.508342 4983 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.508813 4983 status_manager.go:851] "Failed to get status for pod" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-64cdc6ff96-p8q9g\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 
20:51:09.509304 4983 status_manager.go:851] "Failed to get status for pod" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6dcc87d69d-p8fwj\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.509773 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.510249 4983 status_manager.go:851] "Failed to get status for pod" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-67cb4dc6d4-9zpxb\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.632114 4983 status_manager.go:851] "Failed to get status for pod" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.633081 4983 status_manager.go:851] "Failed to get status for pod" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-67cb4dc6d4-9zpxb\": dial 
tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.633695 4983 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.634155 4983 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.634820 4983 status_manager.go:851] "Failed to get status for pod" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-64cdc6ff96-p8q9g\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.635232 4983 status_manager.go:851] "Failed to get status for pod" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-6dcc87d69d-p8fwj\": dial tcp 38.102.83.173:6443: connect: connection refused" Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.929955 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:51:09 crc kubenswrapper[4983]: I1125 20:51:09.930454 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:51:10 crc kubenswrapper[4983]: I1125 20:51:10.130602 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:51:10 crc kubenswrapper[4983]: I1125 20:51:10.136413 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:51:10 crc kubenswrapper[4983]: I1125 20:51:10.538048 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e7a9e351abece8d7761cbf0358930370a29cd1cec049940534fb140432abae25"} Nov 25 20:51:10 crc kubenswrapper[4983]: I1125 20:51:10.538145 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f1846d039cd0c25c2b0f4c1fa6841b8d1faa9db2ee8e1a67fb5c72e52b018b18"} Nov 25 20:51:10 crc kubenswrapper[4983]: I1125 20:51:10.538183 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:51:11 crc kubenswrapper[4983]: I1125 20:51:11.552600 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1400b286968442733772e1ad5cead71197f97021ed5d0e9215164d86b69f6a10"} Nov 25 20:51:11 crc kubenswrapper[4983]: I1125 20:51:11.553186 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ef2ad50aded8fe5a97f55f70b7435c4c5433b067661447b7c134e794d998dd5c"} Nov 25 20:51:11 crc kubenswrapper[4983]: I1125 20:51:11.553217 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2ce94175cd0ffb7e55b8447a1f2b82d872c7ef8dd7ee0cf6bbeefed45185ac8c"} Nov 25 20:51:11 crc kubenswrapper[4983]: I1125 20:51:11.552963 4983 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e409ec05-8a05-432f-ad38-8f7f3591bc3b" Nov 25 20:51:11 crc kubenswrapper[4983]: I1125 20:51:11.553262 4983 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e409ec05-8a05-432f-ad38-8f7f3591bc3b" Nov 25 20:51:11 crc kubenswrapper[4983]: I1125 20:51:11.930542 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:51:11 crc kubenswrapper[4983]: I1125 20:51:11.931826 4983 scope.go:117] "RemoveContainer" containerID="bc7c81b4cbadb4eb3ea3bba28a616a384d0d45635730442b3cae70467dfecbb9" Nov 25 20:51:11 crc kubenswrapper[4983]: E1125 20:51:11.932085 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-67cb4dc6d4-9zpxb_openstack-operators(e1668e7f-55bb-415c-b378-1c70483b30a6)\"" 
pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" Nov 25 20:51:12 crc kubenswrapper[4983]: I1125 20:51:12.163625 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:51:12 crc kubenswrapper[4983]: I1125 20:51:12.164403 4983 scope.go:117] "RemoveContainer" containerID="1630196c9d10cb5193e4f92c7dee14b3ff3b2cb8bf68b24ed51bd9d02e166dc5" Nov 25 20:51:12 crc kubenswrapper[4983]: E1125 20:51:12.164644 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-64cdc6ff96-p8q9g_openstack-operators(a096f840-35b3-48c1-8c0e-762b67b8bde0)\"" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" Nov 25 20:51:13 crc kubenswrapper[4983]: I1125 20:51:13.642327 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:51:13 crc kubenswrapper[4983]: I1125 20:51:13.655749 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:51:13 crc kubenswrapper[4983]: I1125 20:51:13.657064 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:51:14 crc kubenswrapper[4983]: I1125 20:51:14.892026 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="ae259426-d08e-4d8f-b3e7-f06847f1c2da" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 20:51:14 crc kubenswrapper[4983]: I1125 20:51:14.892431 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openstack/kube-state-metrics-0" Nov 25 20:51:14 crc kubenswrapper[4983]: I1125 20:51:14.893513 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"b3988a1ec56ae493d08f23b66f3a2feb37029e37c15d737df8d6277f5f09804d"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Nov 25 20:51:14 crc kubenswrapper[4983]: I1125 20:51:14.893589 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ae259426-d08e-4d8f-b3e7-f06847f1c2da" containerName="kube-state-metrics" containerID="cri-o://b3988a1ec56ae493d08f23b66f3a2feb37029e37c15d737df8d6277f5f09804d" gracePeriod=30 Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.206312 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.207201 4983 scope.go:117] "RemoveContainer" containerID="480172d063f01071881eb46657e72676ebccdded430b4849f96406415a536761" Nov 25 20:51:15 crc kubenswrapper[4983]: E1125 20:51:15.207631 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-6dcc87d69d-p8fwj_metallb-system(74baeb7c-21f0-4d1c-9a61-7694f59cc161)\"" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.630039 4983 generic.go:334] "Generic (PLEG): container finished" podID="d7302bdd-d74f-4d95-a354-42fcd52bf22e" containerID="3925847c4c4b73204357af3b98c257a3a464f1b2a6414365a8a0055ec0eb5c11" exitCode=1 Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.630178 4983 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" event={"ID":"d7302bdd-d74f-4d95-a354-42fcd52bf22e","Type":"ContainerDied","Data":"3925847c4c4b73204357af3b98c257a3a464f1b2a6414365a8a0055ec0eb5c11"} Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.631514 4983 scope.go:117] "RemoveContainer" containerID="3925847c4c4b73204357af3b98c257a3a464f1b2a6414365a8a0055ec0eb5c11" Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.632376 4983 generic.go:334] "Generic (PLEG): container finished" podID="ae259426-d08e-4d8f-b3e7-f06847f1c2da" containerID="b3988a1ec56ae493d08f23b66f3a2feb37029e37c15d737df8d6277f5f09804d" exitCode=2 Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.632463 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ae259426-d08e-4d8f-b3e7-f06847f1c2da","Type":"ContainerDied","Data":"b3988a1ec56ae493d08f23b66f3a2feb37029e37c15d737df8d6277f5f09804d"} Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.635482 4983 generic.go:334] "Generic (PLEG): container finished" podID="afff7723-36e3-42ae-9fac-9f8fdb86d839" containerID="9592feffbfcdb7be9b3e19c3a4a5ddee1f33b87b1d6917be2e5b8103c0b057e1" exitCode=1 Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.635600 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" event={"ID":"afff7723-36e3-42ae-9fac-9f8fdb86d839","Type":"ContainerDied","Data":"9592feffbfcdb7be9b3e19c3a4a5ddee1f33b87b1d6917be2e5b8103c0b057e1"} Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.636056 4983 scope.go:117] "RemoveContainer" containerID="9592feffbfcdb7be9b3e19c3a4a5ddee1f33b87b1d6917be2e5b8103c0b057e1" Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.638650 4983 generic.go:334] "Generic (PLEG): container finished" podID="badb10c7-4c8c-42c4-b481-221377fa7255" 
containerID="b325d87e8c2ba3da5523a71d4dfe14afb4b9bb4ca62a42e706a8dbb1ef803d0b" exitCode=1 Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.638723 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" event={"ID":"badb10c7-4c8c-42c4-b481-221377fa7255","Type":"ContainerDied","Data":"b325d87e8c2ba3da5523a71d4dfe14afb4b9bb4ca62a42e706a8dbb1ef803d0b"} Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.639378 4983 scope.go:117] "RemoveContainer" containerID="b325d87e8c2ba3da5523a71d4dfe14afb4b9bb4ca62a42e706a8dbb1ef803d0b" Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.641093 4983 generic.go:334] "Generic (PLEG): container finished" podID="64141c1d-799a-4d72-aa99-e54975052879" containerID="43db46b0b3b02b2d4e825e80e3b8ab609c79d422bd430339cf81f4ce44095b64" exitCode=1 Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.641140 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" event={"ID":"64141c1d-799a-4d72-aa99-e54975052879","Type":"ContainerDied","Data":"43db46b0b3b02b2d4e825e80e3b8ab609c79d422bd430339cf81f4ce44095b64"} Nov 25 20:51:15 crc kubenswrapper[4983]: I1125 20:51:15.642074 4983 scope.go:117] "RemoveContainer" containerID="43db46b0b3b02b2d4e825e80e3b8ab609c79d422bd430339cf81f4ce44095b64" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.656526 4983 generic.go:334] "Generic (PLEG): container finished" podID="0d3d657c-e179-43c7-abca-c37f8396d1cd" containerID="d1a8e350a61ce7e9dc9dbf72ea6c3efcf9e48faf3b4f1af458dc6c5aa273ecdf" exitCode=1 Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.656713 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" event={"ID":"0d3d657c-e179-43c7-abca-c37f8396d1cd","Type":"ContainerDied","Data":"d1a8e350a61ce7e9dc9dbf72ea6c3efcf9e48faf3b4f1af458dc6c5aa273ecdf"} 
Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.658224 4983 scope.go:117] "RemoveContainer" containerID="d1a8e350a61ce7e9dc9dbf72ea6c3efcf9e48faf3b4f1af458dc6c5aa273ecdf" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.667600 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" event={"ID":"72f1d28e-26ff-43d3-bd93-54c21d9cdd70","Type":"ContainerDied","Data":"2689d7a466b399d9262e61b5e10344c8ac51ee7582650191db77c17c16878761"} Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.668525 4983 scope.go:117] "RemoveContainer" containerID="2689d7a466b399d9262e61b5e10344c8ac51ee7582650191db77c17c16878761" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.671787 4983 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.672727 4983 generic.go:334] "Generic (PLEG): container finished" podID="72f1d28e-26ff-43d3-bd93-54c21d9cdd70" containerID="2689d7a466b399d9262e61b5e10344c8ac51ee7582650191db77c17c16878761" exitCode=1 Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.685521 4983 generic.go:334] "Generic (PLEG): container finished" podID="afff7723-36e3-42ae-9fac-9f8fdb86d839" containerID="a5aec0481055d47f0b8e60ee7cc18b2065fede12afd801d4cfc4ff25be19edcb" exitCode=1 Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.685677 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" event={"ID":"afff7723-36e3-42ae-9fac-9f8fdb86d839","Type":"ContainerDied","Data":"a5aec0481055d47f0b8e60ee7cc18b2065fede12afd801d4cfc4ff25be19edcb"} Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.685719 4983 scope.go:117] "RemoveContainer" containerID="9592feffbfcdb7be9b3e19c3a4a5ddee1f33b87b1d6917be2e5b8103c0b057e1" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.686568 4983 
scope.go:117] "RemoveContainer" containerID="a5aec0481055d47f0b8e60ee7cc18b2065fede12afd801d4cfc4ff25be19edcb" Nov 25 20:51:16 crc kubenswrapper[4983]: E1125 20:51:16.686809 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-66f4dd4bc7-rwkrr_openstack-operators(afff7723-36e3-42ae-9fac-9f8fdb86d839)\"" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" podUID="afff7723-36e3-42ae-9fac-9f8fdb86d839" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.698017 4983 generic.go:334] "Generic (PLEG): container finished" podID="4743af06-44e2-438a-82b7-bf32b0f5ca03" containerID="85db8e24ee4320e5d133fc2ba6087f768cb0cd5c42aed98b4791d4dc9ad25df5" exitCode=1 Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.698109 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" event={"ID":"4743af06-44e2-438a-82b7-bf32b0f5ca03","Type":"ContainerDied","Data":"85db8e24ee4320e5d133fc2ba6087f768cb0cd5c42aed98b4791d4dc9ad25df5"} Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.698526 4983 scope.go:117] "RemoveContainer" containerID="85db8e24ee4320e5d133fc2ba6087f768cb0cd5c42aed98b4791d4dc9ad25df5" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.703223 4983 generic.go:334] "Generic (PLEG): container finished" podID="badb10c7-4c8c-42c4-b481-221377fa7255" containerID="629fb45e6dd5aed03ad6f8e0d59e5a710a0a4915571be20834e904f1a9919661" exitCode=1 Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.703278 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" event={"ID":"badb10c7-4c8c-42c4-b481-221377fa7255","Type":"ContainerDied","Data":"629fb45e6dd5aed03ad6f8e0d59e5a710a0a4915571be20834e904f1a9919661"} Nov 25 
20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.704258 4983 scope.go:117] "RemoveContainer" containerID="629fb45e6dd5aed03ad6f8e0d59e5a710a0a4915571be20834e904f1a9919661" Nov 25 20:51:16 crc kubenswrapper[4983]: E1125 20:51:16.704626 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-6fdcddb789-ljpb8_openstack-operators(badb10c7-4c8c-42c4-b481-221377fa7255)\"" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" podUID="badb10c7-4c8c-42c4-b481-221377fa7255" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.706394 4983 generic.go:334] "Generic (PLEG): container finished" podID="5b14316c-9639-4934-a5e9-5381d2797ef5" containerID="ce3bb525bce7355f782d0164aa0dcec2c15378d6b3aaffc8bbf1521842c8c9ae" exitCode=1 Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.706501 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" event={"ID":"5b14316c-9639-4934-a5e9-5381d2797ef5","Type":"ContainerDied","Data":"ce3bb525bce7355f782d0164aa0dcec2c15378d6b3aaffc8bbf1521842c8c9ae"} Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.706851 4983 scope.go:117] "RemoveContainer" containerID="ce3bb525bce7355f782d0164aa0dcec2c15378d6b3aaffc8bbf1521842c8c9ae" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.724800 4983 generic.go:334] "Generic (PLEG): container finished" podID="64141c1d-799a-4d72-aa99-e54975052879" containerID="37df195dbd11cf2f9caa964fb201a4722d0743043b29a5a01cdeedfa46bad6be" exitCode=1 Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.725017 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" 
event={"ID":"64141c1d-799a-4d72-aa99-e54975052879","Type":"ContainerDied","Data":"37df195dbd11cf2f9caa964fb201a4722d0743043b29a5a01cdeedfa46bad6be"} Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.725946 4983 scope.go:117] "RemoveContainer" containerID="37df195dbd11cf2f9caa964fb201a4722d0743043b29a5a01cdeedfa46bad6be" Nov 25 20:51:16 crc kubenswrapper[4983]: E1125 20:51:16.726262 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-57988cc5b5-mhjtj_openstack-operators(64141c1d-799a-4d72-aa99-e54975052879)\"" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" podUID="64141c1d-799a-4d72-aa99-e54975052879" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.787376 4983 generic.go:334] "Generic (PLEG): container finished" podID="d7302bdd-d74f-4d95-a354-42fcd52bf22e" containerID="d3d45ee613969be8527633716da9d5b63f317a00ba1f42f38370d5db78bf0479" exitCode=1 Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.787437 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" event={"ID":"d7302bdd-d74f-4d95-a354-42fcd52bf22e","Type":"ContainerDied","Data":"d3d45ee613969be8527633716da9d5b63f317a00ba1f42f38370d5db78bf0479"} Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.788191 4983 scope.go:117] "RemoveContainer" containerID="d3d45ee613969be8527633716da9d5b63f317a00ba1f42f38370d5db78bf0479" Nov 25 20:51:16 crc kubenswrapper[4983]: E1125 20:51:16.788426 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-56897c768d-zc5rq_openstack-operators(d7302bdd-d74f-4d95-a354-42fcd52bf22e)\"" 
pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" podUID="d7302bdd-d74f-4d95-a354-42fcd52bf22e" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.801665 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.815006 4983 generic.go:334] "Generic (PLEG): container finished" podID="1e439ca1-98f3-4650-96da-1e4c1b2da37e" containerID="5ef7631387b47665b07c4873cfee1b9d2f606c285a9e4c6bafb3305ca8cfe8c6" exitCode=1 Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.815058 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" event={"ID":"1e439ca1-98f3-4650-96da-1e4c1b2da37e","Type":"ContainerDied","Data":"5ef7631387b47665b07c4873cfee1b9d2f606c285a9e4c6bafb3305ca8cfe8c6"} Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.815813 4983 scope.go:117] "RemoveContainer" containerID="5ef7631387b47665b07c4873cfee1b9d2f606c285a9e4c6bafb3305ca8cfe8c6" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.871176 4983 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d5855e40-3baa-4a2c-8a96-1f6ee9fd4ccc" Nov 25 20:51:16 crc kubenswrapper[4983]: I1125 20:51:16.922187 4983 scope.go:117] "RemoveContainer" containerID="b325d87e8c2ba3da5523a71d4dfe14afb4b9bb4ca62a42e706a8dbb1ef803d0b" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.145281 4983 scope.go:117] "RemoveContainer" containerID="43db46b0b3b02b2d4e825e80e3b8ab609c79d422bd430339cf81f4ce44095b64" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.257993 4983 scope.go:117] "RemoveContainer" containerID="3925847c4c4b73204357af3b98c257a3a464f1b2a6414365a8a0055ec0eb5c11" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.494625 4983 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.494690 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.782772 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.784476 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.831909 4983 generic.go:334] "Generic (PLEG): container finished" podID="0d3d657c-e179-43c7-abca-c37f8396d1cd" containerID="74aa735cc137160b6b2e9aa3fb62381a29dfcb859e028601080e7caef696e191" exitCode=1 Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.831952 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" event={"ID":"0d3d657c-e179-43c7-abca-c37f8396d1cd","Type":"ContainerDied","Data":"74aa735cc137160b6b2e9aa3fb62381a29dfcb859e028601080e7caef696e191"} Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.832026 4983 scope.go:117] "RemoveContainer" containerID="d1a8e350a61ce7e9dc9dbf72ea6c3efcf9e48faf3b4f1af458dc6c5aa273ecdf" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.832783 4983 scope.go:117] "RemoveContainer" containerID="74aa735cc137160b6b2e9aa3fb62381a29dfcb859e028601080e7caef696e191" Nov 25 20:51:17 crc kubenswrapper[4983]: E1125 20:51:17.833103 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=infra-operator-controller-manager-57548d458d-qlm9k_openstack-operators(0d3d657c-e179-43c7-abca-c37f8396d1cd)\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" podUID="0d3d657c-e179-43c7-abca-c37f8396d1cd" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.855216 4983 generic.go:334] "Generic (PLEG): container finished" podID="ae259426-d08e-4d8f-b3e7-f06847f1c2da" containerID="11110e9e9326767ad182eb5a397b307dc74ae75d68627fb79732c3aa4224a8a9" exitCode=1 Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.855319 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ae259426-d08e-4d8f-b3e7-f06847f1c2da","Type":"ContainerDied","Data":"11110e9e9326767ad182eb5a397b307dc74ae75d68627fb79732c3aa4224a8a9"} Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.856141 4983 scope.go:117] "RemoveContainer" containerID="11110e9e9326767ad182eb5a397b307dc74ae75d68627fb79732c3aa4224a8a9" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.871191 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" event={"ID":"4743af06-44e2-438a-82b7-bf32b0f5ca03","Type":"ContainerStarted","Data":"b2e6dd93e43bba3fda57e7ab9b2e3aea0a5d96dff76d9c59e58943993852a747"} Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.871377 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.874095 4983 generic.go:334] "Generic (PLEG): container finished" podID="ff284fea-7792-40e1-8ede-f52412a6c014" containerID="e566187b30e83938355b3a1c52c572bdd1b1cfc3f9b6770bdb9fd69f8e5862bf" exitCode=1 Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.874172 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" event={"ID":"ff284fea-7792-40e1-8ede-f52412a6c014","Type":"ContainerDied","Data":"e566187b30e83938355b3a1c52c572bdd1b1cfc3f9b6770bdb9fd69f8e5862bf"} Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.875396 4983 scope.go:117] "RemoveContainer" containerID="e566187b30e83938355b3a1c52c572bdd1b1cfc3f9b6770bdb9fd69f8e5862bf" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.881512 4983 generic.go:334] "Generic (PLEG): container finished" podID="1e439ca1-98f3-4650-96da-1e4c1b2da37e" containerID="2e76d837019e461a6647697fea0d99d388a7fa5c0c00db18a6f478a0dba141e4" exitCode=1 Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.881638 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" event={"ID":"1e439ca1-98f3-4650-96da-1e4c1b2da37e","Type":"ContainerDied","Data":"2e76d837019e461a6647697fea0d99d388a7fa5c0c00db18a6f478a0dba141e4"} Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.888529 4983 scope.go:117] "RemoveContainer" containerID="2e76d837019e461a6647697fea0d99d388a7fa5c0c00db18a6f478a0dba141e4" Nov 25 20:51:17 crc kubenswrapper[4983]: E1125 20:51:17.889428 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-656dcb59d4-rpfhz_openstack-operators(1e439ca1-98f3-4650-96da-1e4c1b2da37e)\"" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" podUID="1e439ca1-98f3-4650-96da-1e4c1b2da37e" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.898261 4983 generic.go:334] "Generic (PLEG): container finished" podID="5b14316c-9639-4934-a5e9-5381d2797ef5" containerID="1e4ca543609b941cf9928a57e8833b9545f0d8d1a38f1f98efde51f9c9ca48e8" exitCode=1 Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.898395 4983 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" event={"ID":"5b14316c-9639-4934-a5e9-5381d2797ef5","Type":"ContainerDied","Data":"1e4ca543609b941cf9928a57e8833b9545f0d8d1a38f1f98efde51f9c9ca48e8"} Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.899476 4983 scope.go:117] "RemoveContainer" containerID="1e4ca543609b941cf9928a57e8833b9545f0d8d1a38f1f98efde51f9c9ca48e8" Nov 25 20:51:17 crc kubenswrapper[4983]: E1125 20:51:17.899912 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-d77b94747-4c95t_openstack-operators(5b14316c-9639-4934-a5e9-5381d2797ef5)\"" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" podUID="5b14316c-9639-4934-a5e9-5381d2797ef5" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.906057 4983 generic.go:334] "Generic (PLEG): container finished" podID="72f1d28e-26ff-43d3-bd93-54c21d9cdd70" containerID="e01722029263149f6171ba8721e1071527af2b8fb4e91e0b577222b363352dc7" exitCode=1 Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.906172 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" event={"ID":"72f1d28e-26ff-43d3-bd93-54c21d9cdd70","Type":"ContainerDied","Data":"e01722029263149f6171ba8721e1071527af2b8fb4e91e0b577222b363352dc7"} Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.907384 4983 scope.go:117] "RemoveContainer" containerID="e01722029263149f6171ba8721e1071527af2b8fb4e91e0b577222b363352dc7" Nov 25 20:51:17 crc kubenswrapper[4983]: E1125 20:51:17.907868 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=horizon-operator-controller-manager-5d494799bf-cctnq_openstack-operators(72f1d28e-26ff-43d3-bd93-54c21d9cdd70)\"" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" podUID="72f1d28e-26ff-43d3-bd93-54c21d9cdd70" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.913998 4983 generic.go:334] "Generic (PLEG): container finished" podID="f32095da-1fdc-4d52-b082-98b39652cdc6" containerID="75e7c1267f1210f5dbccd40298f1b4a84918450298e182c51176721003d9c049" exitCode=1 Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.914365 4983 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e409ec05-8a05-432f-ad38-8f7f3591bc3b" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.914380 4983 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e409ec05-8a05-432f-ad38-8f7f3591bc3b" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.914622 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" event={"ID":"f32095da-1fdc-4d52-b082-98b39652cdc6","Type":"ContainerDied","Data":"75e7c1267f1210f5dbccd40298f1b4a84918450298e182c51176721003d9c049"} Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.915087 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.916092 4983 scope.go:117] "RemoveContainer" containerID="75e7c1267f1210f5dbccd40298f1b4a84918450298e182c51176721003d9c049" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.922125 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.930934 4983 scope.go:117] "RemoveContainer" containerID="b3988a1ec56ae493d08f23b66f3a2feb37029e37c15d737df8d6277f5f09804d" Nov 25 
20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.976141 4983 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d5855e40-3baa-4a2c-8a96-1f6ee9fd4ccc" Nov 25 20:51:17 crc kubenswrapper[4983]: I1125 20:51:17.982342 4983 scope.go:117] "RemoveContainer" containerID="5ef7631387b47665b07c4873cfee1b9d2f606c285a9e4c6bafb3305ca8cfe8c6" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.088397 4983 scope.go:117] "RemoveContainer" containerID="ce3bb525bce7355f782d0164aa0dcec2c15378d6b3aaffc8bbf1521842c8c9ae" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.104649 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.104710 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.226212 4983 scope.go:117] "RemoveContainer" containerID="2689d7a466b399d9262e61b5e10344c8ac51ee7582650191db77c17c16878761" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.934810 4983 generic.go:334] "Generic (PLEG): container finished" podID="ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042" containerID="054daea6e75eb5e2bdebf3c62e6aeb967c39fff4d427308f9fa784e913be78c0" exitCode=1 Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.935224 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" event={"ID":"ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042","Type":"ContainerDied","Data":"054daea6e75eb5e2bdebf3c62e6aeb967c39fff4d427308f9fa784e913be78c0"} Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.936297 4983 scope.go:117] "RemoveContainer" 
containerID="054daea6e75eb5e2bdebf3c62e6aeb967c39fff4d427308f9fa784e913be78c0" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.941300 4983 generic.go:334] "Generic (PLEG): container finished" podID="00a7db78-81a7-481d-a20e-135c60e139e3" containerID="c3fb31f1b338ebf2c6933c9cd12a13b813bde606aa2d327b8486df26a9a1159e" exitCode=1 Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.941401 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" event={"ID":"00a7db78-81a7-481d-a20e-135c60e139e3","Type":"ContainerDied","Data":"c3fb31f1b338ebf2c6933c9cd12a13b813bde606aa2d327b8486df26a9a1159e"} Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.942741 4983 scope.go:117] "RemoveContainer" containerID="c3fb31f1b338ebf2c6933c9cd12a13b813bde606aa2d327b8486df26a9a1159e" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.947617 4983 generic.go:334] "Generic (PLEG): container finished" podID="ae259426-d08e-4d8f-b3e7-f06847f1c2da" containerID="49508c5f00ea8af8db407e6c8b401bb4480e7e5022f7a7bcac3493405b04fc7d" exitCode=1 Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.947728 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ae259426-d08e-4d8f-b3e7-f06847f1c2da","Type":"ContainerDied","Data":"49508c5f00ea8af8db407e6c8b401bb4480e7e5022f7a7bcac3493405b04fc7d"} Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.948883 4983 scope.go:117] "RemoveContainer" containerID="49508c5f00ea8af8db407e6c8b401bb4480e7e5022f7a7bcac3493405b04fc7d" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.949489 4983 scope.go:117] "RemoveContainer" containerID="11110e9e9326767ad182eb5a397b307dc74ae75d68627fb79732c3aa4224a8a9" Nov 25 20:51:18 crc kubenswrapper[4983]: E1125 20:51:18.951100 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(ae259426-d08e-4d8f-b3e7-f06847f1c2da)\"" pod="openstack/kube-state-metrics-0" podUID="ae259426-d08e-4d8f-b3e7-f06847f1c2da" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.955809 4983 generic.go:334] "Generic (PLEG): container finished" podID="ff284fea-7792-40e1-8ede-f52412a6c014" containerID="2cf4876ee036e56235ae081269cb2aaad35bfdc9337d5bde808bfa444a977873" exitCode=1 Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.955978 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" event={"ID":"ff284fea-7792-40e1-8ede-f52412a6c014","Type":"ContainerDied","Data":"2cf4876ee036e56235ae081269cb2aaad35bfdc9337d5bde808bfa444a977873"} Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.957942 4983 scope.go:117] "RemoveContainer" containerID="2cf4876ee036e56235ae081269cb2aaad35bfdc9337d5bde808bfa444a977873" Nov 25 20:51:18 crc kubenswrapper[4983]: E1125 20:51:18.958523 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-bwf7d_openstack-operators(ff284fea-7792-40e1-8ede-f52412a6c014)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" podUID="ff284fea-7792-40e1-8ede-f52412a6c014" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.964479 4983 generic.go:334] "Generic (PLEG): container finished" podID="9d7c78e4-4890-4527-9db4-131842750615" containerID="a51cd56a7a6390cdbb05b64926ccaaf27335f93ef9186bb6b69d703b3cdead49" exitCode=1 Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.964545 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" 
event={"ID":"9d7c78e4-4890-4527-9db4-131842750615","Type":"ContainerDied","Data":"a51cd56a7a6390cdbb05b64926ccaaf27335f93ef9186bb6b69d703b3cdead49"} Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.965413 4983 scope.go:117] "RemoveContainer" containerID="a51cd56a7a6390cdbb05b64926ccaaf27335f93ef9186bb6b69d703b3cdead49" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.969499 4983 generic.go:334] "Generic (PLEG): container finished" podID="668ad5ef-ec7f-4239-94c5-8bb868f653ce" containerID="03f2d6cb3de1e454a3267d4f5a089b8e764520b2333c623bbfd84cac9ff88394" exitCode=1 Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.969565 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" event={"ID":"668ad5ef-ec7f-4239-94c5-8bb868f653ce","Type":"ContainerDied","Data":"03f2d6cb3de1e454a3267d4f5a089b8e764520b2333c623bbfd84cac9ff88394"} Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.969844 4983 scope.go:117] "RemoveContainer" containerID="03f2d6cb3de1e454a3267d4f5a089b8e764520b2333c623bbfd84cac9ff88394" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.972503 4983 scope.go:117] "RemoveContainer" containerID="74aa735cc137160b6b2e9aa3fb62381a29dfcb859e028601080e7caef696e191" Nov 25 20:51:18 crc kubenswrapper[4983]: E1125 20:51:18.972716 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-57548d458d-qlm9k_openstack-operators(0d3d657c-e179-43c7-abca-c37f8396d1cd)\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" podUID="0d3d657c-e179-43c7-abca-c37f8396d1cd" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.986042 4983 generic.go:334] "Generic (PLEG): container finished" podID="f32095da-1fdc-4d52-b082-98b39652cdc6" 
containerID="2094fd3153577c010182a75fe0f6a1565cc331ae525fd1287db5df2a1c4ad611" exitCode=1 Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.986646 4983 scope.go:117] "RemoveContainer" containerID="2094fd3153577c010182a75fe0f6a1565cc331ae525fd1287db5df2a1c4ad611" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.986646 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" event={"ID":"f32095da-1fdc-4d52-b082-98b39652cdc6","Type":"ContainerDied","Data":"2094fd3153577c010182a75fe0f6a1565cc331ae525fd1287db5df2a1c4ad611"} Nov 25 20:51:18 crc kubenswrapper[4983]: E1125 20:51:18.986965 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-5cf7cd9d4-bwfnd_openstack-operators(f32095da-1fdc-4d52-b082-98b39652cdc6)\"" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" podUID="f32095da-1fdc-4d52-b082-98b39652cdc6" Nov 25 20:51:18 crc kubenswrapper[4983]: I1125 20:51:18.998162 4983 generic.go:334] "Generic (PLEG): container finished" podID="e5edd26f-9ffb-4be8-86c1-99d32e812816" containerID="1a01b9904aaa3423a2eaf3d09ebb9c232e52fa9cdd9a996360b637a149ebe722" exitCode=1 Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.000045 4983 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e409ec05-8a05-432f-ad38-8f7f3591bc3b" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.000101 4983 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e409ec05-8a05-432f-ad38-8f7f3591bc3b" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.000812 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" 
event={"ID":"e5edd26f-9ffb-4be8-86c1-99d32e812816","Type":"ContainerDied","Data":"1a01b9904aaa3423a2eaf3d09ebb9c232e52fa9cdd9a996360b637a149ebe722"} Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.003418 4983 scope.go:117] "RemoveContainer" containerID="1a01b9904aaa3423a2eaf3d09ebb9c232e52fa9cdd9a996360b637a149ebe722" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.051784 4983 scope.go:117] "RemoveContainer" containerID="e566187b30e83938355b3a1c52c572bdd1b1cfc3f9b6770bdb9fd69f8e5862bf" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.084530 4983 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d5855e40-3baa-4a2c-8a96-1f6ee9fd4ccc" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.184129 4983 scope.go:117] "RemoveContainer" containerID="29eeb441878fb015c597ffc632d17650b5695be6061bb569dcda612a581e1f7a" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.243823 4983 scope.go:117] "RemoveContainer" containerID="75e7c1267f1210f5dbccd40298f1b4a84918450298e182c51176721003d9c049" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.271039 4983 scope.go:117] "RemoveContainer" containerID="176adebbbdb058104ae2f7fa1a5aea8af5c6adbdce422a25b0c6329c94f235e0" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.300916 4983 scope.go:117] "RemoveContainer" containerID="c3ecbcadc6d6f9fc996b0dd303bd78d99c5d859640ceaeb5335bbc2a12cfa2ad" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.361478 4983 scope.go:117] "RemoveContainer" containerID="be1b255d5612c48700a605301872406c9c659670a1857d3d42e4354bbf4b2a78" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.386648 4983 scope.go:117] "RemoveContainer" containerID="0ae35c17f16deec7e5c15202368b42dccc2aef10cbcd2577e7843e7578e5014c" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.415242 4983 scope.go:117] "RemoveContainer" 
containerID="255489602e078ff1e0b16c370edc97cce3639ec4214ef4d187a7e949317efcff" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.444454 4983 scope.go:117] "RemoveContainer" containerID="ba2110f83a69a10255fcbf1e45c1bc545fbddea3d4b5b7c270c79a5c7432973b" Nov 25 20:51:19 crc kubenswrapper[4983]: I1125 20:51:19.467952 4983 scope.go:117] "RemoveContainer" containerID="abf9dda9597a02e3fb131d5ba1d4d0061055863180d91cd3dea2b56392151776" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.014029 4983 generic.go:334] "Generic (PLEG): container finished" podID="00a7db78-81a7-481d-a20e-135c60e139e3" containerID="4421224c7ebb08cbd300be1936b74006b2fde0b8e43724a7e0ebf4cd3f8df096" exitCode=1 Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.014137 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" event={"ID":"00a7db78-81a7-481d-a20e-135c60e139e3","Type":"ContainerDied","Data":"4421224c7ebb08cbd300be1936b74006b2fde0b8e43724a7e0ebf4cd3f8df096"} Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.014232 4983 scope.go:117] "RemoveContainer" containerID="c3fb31f1b338ebf2c6933c9cd12a13b813bde606aa2d327b8486df26a9a1159e" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.015476 4983 scope.go:117] "RemoveContainer" containerID="4421224c7ebb08cbd300be1936b74006b2fde0b8e43724a7e0ebf4cd3f8df096" Nov 25 20:51:20 crc kubenswrapper[4983]: E1125 20:51:20.016224 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-955677c94-lzn84_openstack-operators(00a7db78-81a7-481d-a20e-135c60e139e3)\"" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" podUID="00a7db78-81a7-481d-a20e-135c60e139e3" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.017510 4983 generic.go:334] "Generic (PLEG): container 
finished" podID="1ec6aefb-824e-4248-ac00-c1d0b526edc6" containerID="f3c6fc8c1840d8e98b8185fd5d3f03c7e641009cf3daa9c8f72ddb04f016b9e4" exitCode=1 Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.017596 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" event={"ID":"1ec6aefb-824e-4248-ac00-c1d0b526edc6","Type":"ContainerDied","Data":"f3c6fc8c1840d8e98b8185fd5d3f03c7e641009cf3daa9c8f72ddb04f016b9e4"} Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.019035 4983 scope.go:117] "RemoveContainer" containerID="f3c6fc8c1840d8e98b8185fd5d3f03c7e641009cf3daa9c8f72ddb04f016b9e4" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.034396 4983 generic.go:334] "Generic (PLEG): container finished" podID="e5edd26f-9ffb-4be8-86c1-99d32e812816" containerID="c01e4e36ac4396bea45ed080266594b5adc85a27a286000a4b96139b2c089e98" exitCode=1 Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.034682 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" event={"ID":"e5edd26f-9ffb-4be8-86c1-99d32e812816","Type":"ContainerDied","Data":"c01e4e36ac4396bea45ed080266594b5adc85a27a286000a4b96139b2c089e98"} Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.035794 4983 scope.go:117] "RemoveContainer" containerID="c01e4e36ac4396bea45ed080266594b5adc85a27a286000a4b96139b2c089e98" Nov 25 20:51:20 crc kubenswrapper[4983]: E1125 20:51:20.036105 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-7b4567c7cf-fchv4_openstack-operators(e5edd26f-9ffb-4be8-86c1-99d32e812816)\"" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" podUID="e5edd26f-9ffb-4be8-86c1-99d32e812816" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 
20:51:20.039541 4983 generic.go:334] "Generic (PLEG): container finished" podID="92f1d8fa-69cf-49c3-a616-82a185ff8dd5" containerID="1708c9a77068dfec18ed8730dc47b4dcc63fcd4eb60fd0dcaa6ea3fe29af7859" exitCode=1 Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.040033 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" event={"ID":"92f1d8fa-69cf-49c3-a616-82a185ff8dd5","Type":"ContainerDied","Data":"1708c9a77068dfec18ed8730dc47b4dcc63fcd4eb60fd0dcaa6ea3fe29af7859"} Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.040914 4983 scope.go:117] "RemoveContainer" containerID="1708c9a77068dfec18ed8730dc47b4dcc63fcd4eb60fd0dcaa6ea3fe29af7859" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.045533 4983 generic.go:334] "Generic (PLEG): container finished" podID="48b3567f-5b1a-4f14-891c-775c05e2d768" containerID="3e9d64d65e58ea7df2d9124a0951eb9ee7d90f8ce11b33d384241318847d1139" exitCode=1 Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.045624 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" event={"ID":"48b3567f-5b1a-4f14-891c-775c05e2d768","Type":"ContainerDied","Data":"3e9d64d65e58ea7df2d9124a0951eb9ee7d90f8ce11b33d384241318847d1139"} Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.046530 4983 scope.go:117] "RemoveContainer" containerID="3e9d64d65e58ea7df2d9124a0951eb9ee7d90f8ce11b33d384241318847d1139" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.054150 4983 generic.go:334] "Generic (PLEG): container finished" podID="cf765330-a0f9-4603-a92b-4aec8feaeafb" containerID="6db4f7718946bdb494f6d3f9c8048577aea80b535d5af70bbebcfeff8ee9d42f" exitCode=1 Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.054185 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" 
event={"ID":"cf765330-a0f9-4603-a92b-4aec8feaeafb","Type":"ContainerDied","Data":"6db4f7718946bdb494f6d3f9c8048577aea80b535d5af70bbebcfeff8ee9d42f"} Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.054625 4983 scope.go:117] "RemoveContainer" containerID="6db4f7718946bdb494f6d3f9c8048577aea80b535d5af70bbebcfeff8ee9d42f" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.059438 4983 generic.go:334] "Generic (PLEG): container finished" podID="da827172-6e3a-42a7-814c-cdfcc18d48d6" containerID="43d73d322ace9f5e5257f02c32c895eacc10e4b59bebab41918aed7b66b3669a" exitCode=1 Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.059488 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" event={"ID":"da827172-6e3a-42a7-814c-cdfcc18d48d6","Type":"ContainerDied","Data":"43d73d322ace9f5e5257f02c32c895eacc10e4b59bebab41918aed7b66b3669a"} Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.060775 4983 scope.go:117] "RemoveContainer" containerID="43d73d322ace9f5e5257f02c32c895eacc10e4b59bebab41918aed7b66b3669a" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.063345 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" event={"ID":"ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042","Type":"ContainerStarted","Data":"a643af970ce9cbc075b92d6df058ff3afcefc3789b84bc466fd5effd11ce33b8"} Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.063711 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.068823 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" 
event={"ID":"668ad5ef-ec7f-4239-94c5-8bb868f653ce","Type":"ContainerStarted","Data":"87ecd181a2c7035a8ab38fac75903b7af06323b49960850c3bbafcbb8c1fb426"} Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.069037 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.075120 4983 generic.go:334] "Generic (PLEG): container finished" podID="2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98" containerID="e247a797cf007e4baa3d65b21cface4683168422ec0bec2f754f806a1b887169" exitCode=1 Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.075308 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" event={"ID":"2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98","Type":"ContainerDied","Data":"e247a797cf007e4baa3d65b21cface4683168422ec0bec2f754f806a1b887169"} Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.076129 4983 scope.go:117] "RemoveContainer" containerID="e247a797cf007e4baa3d65b21cface4683168422ec0bec2f754f806a1b887169" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.090076 4983 generic.go:334] "Generic (PLEG): container finished" podID="9d7c78e4-4890-4527-9db4-131842750615" containerID="ddf3f14ea4a14a862857cfa9fe57af9b6774bdb49fca32cbceb05aeaa9129385" exitCode=1 Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.090171 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" event={"ID":"9d7c78e4-4890-4527-9db4-131842750615","Type":"ContainerDied","Data":"ddf3f14ea4a14a862857cfa9fe57af9b6774bdb49fca32cbceb05aeaa9129385"} Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.090672 4983 scope.go:117] "RemoveContainer" containerID="49508c5f00ea8af8db407e6c8b401bb4480e7e5022f7a7bcac3493405b04fc7d" Nov 25 20:51:20 crc kubenswrapper[4983]: E1125 20:51:20.091024 4983 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(ae259426-d08e-4d8f-b3e7-f06847f1c2da)\"" pod="openstack/kube-state-metrics-0" podUID="ae259426-d08e-4d8f-b3e7-f06847f1c2da" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.091702 4983 scope.go:117] "RemoveContainer" containerID="ddf3f14ea4a14a862857cfa9fe57af9b6774bdb49fca32cbceb05aeaa9129385" Nov 25 20:51:20 crc kubenswrapper[4983]: E1125 20:51:20.092040 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-dj7nt_openstack-operators(9d7c78e4-4890-4527-9db4-131842750615)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" podUID="9d7c78e4-4890-4527-9db4-131842750615" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.100696 4983 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e409ec05-8a05-432f-ad38-8f7f3591bc3b" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.100727 4983 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e409ec05-8a05-432f-ad38-8f7f3591bc3b" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.101657 4983 scope.go:117] "RemoveContainer" containerID="2094fd3153577c010182a75fe0f6a1565cc331ae525fd1287db5df2a1c4ad611" Nov 25 20:51:20 crc kubenswrapper[4983]: E1125 20:51:20.101905 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-5cf7cd9d4-bwfnd_openstack-operators(f32095da-1fdc-4d52-b082-98b39652cdc6)\"" 
pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" podUID="f32095da-1fdc-4d52-b082-98b39652cdc6" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.162507 4983 scope.go:117] "RemoveContainer" containerID="1a01b9904aaa3423a2eaf3d09ebb9c232e52fa9cdd9a996360b637a149ebe722" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.221210 4983 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="d5855e40-3baa-4a2c-8a96-1f6ee9fd4ccc" Nov 25 20:51:20 crc kubenswrapper[4983]: I1125 20:51:20.282869 4983 scope.go:117] "RemoveContainer" containerID="a51cd56a7a6390cdbb05b64926ccaaf27335f93ef9186bb6b69d703b3cdead49" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.111575 4983 generic.go:334] "Generic (PLEG): container finished" podID="cf765330-a0f9-4603-a92b-4aec8feaeafb" containerID="a4edb1a159b34f9ab58acd319c513c3bef9acc8142ab40e903462795cde216fe" exitCode=1 Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.111584 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" event={"ID":"cf765330-a0f9-4603-a92b-4aec8feaeafb","Type":"ContainerDied","Data":"a4edb1a159b34f9ab58acd319c513c3bef9acc8142ab40e903462795cde216fe"} Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.111695 4983 scope.go:117] "RemoveContainer" containerID="6db4f7718946bdb494f6d3f9c8048577aea80b535d5af70bbebcfeff8ee9d42f" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.112317 4983 scope.go:117] "RemoveContainer" containerID="a4edb1a159b34f9ab58acd319c513c3bef9acc8142ab40e903462795cde216fe" Nov 25 20:51:21 crc kubenswrapper[4983]: E1125 20:51:21.112718 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=cinder-operator-controller-manager-6b7f75547b-b9lnt_openstack-operators(cf765330-a0f9-4603-a92b-4aec8feaeafb)\"" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" podUID="cf765330-a0f9-4603-a92b-4aec8feaeafb" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.117177 4983 generic.go:334] "Generic (PLEG): container finished" podID="92f1d8fa-69cf-49c3-a616-82a185ff8dd5" containerID="cb0e4ac0087624140f14a69f3e40ebbeb5a653b462f49355e06bdd4e337fd763" exitCode=1 Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.117261 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" event={"ID":"92f1d8fa-69cf-49c3-a616-82a185ff8dd5","Type":"ContainerDied","Data":"cb0e4ac0087624140f14a69f3e40ebbeb5a653b462f49355e06bdd4e337fd763"} Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.118319 4983 scope.go:117] "RemoveContainer" containerID="cb0e4ac0087624140f14a69f3e40ebbeb5a653b462f49355e06bdd4e337fd763" Nov 25 20:51:21 crc kubenswrapper[4983]: E1125 20:51:21.118633 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-b7bb74d9f-m9bbx_openstack-operators(92f1d8fa-69cf-49c3-a616-82a185ff8dd5)\"" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" podUID="92f1d8fa-69cf-49c3-a616-82a185ff8dd5" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.122136 4983 generic.go:334] "Generic (PLEG): container finished" podID="48b3567f-5b1a-4f14-891c-775c05e2d768" containerID="4023ca5f6518b2912f3f5bfcb195f558ee81cff86ed8643070dbcf51eac1e40b" exitCode=1 Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.122185 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" 
event={"ID":"48b3567f-5b1a-4f14-891c-775c05e2d768","Type":"ContainerDied","Data":"4023ca5f6518b2912f3f5bfcb195f558ee81cff86ed8643070dbcf51eac1e40b"} Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.123332 4983 scope.go:117] "RemoveContainer" containerID="4023ca5f6518b2912f3f5bfcb195f558ee81cff86ed8643070dbcf51eac1e40b" Nov 25 20:51:21 crc kubenswrapper[4983]: E1125 20:51:21.123826 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-5b77f656f-t5knb_openstack-operators(48b3567f-5b1a-4f14-891c-775c05e2d768)\"" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" podUID="48b3567f-5b1a-4f14-891c-775c05e2d768" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.127962 4983 generic.go:334] "Generic (PLEG): container finished" podID="1ec6aefb-824e-4248-ac00-c1d0b526edc6" containerID="92039aeeede095e27e03623279f61c5411873819a0664b6c70529c398ed7a8a2" exitCode=1 Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.128028 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" event={"ID":"1ec6aefb-824e-4248-ac00-c1d0b526edc6","Type":"ContainerDied","Data":"92039aeeede095e27e03623279f61c5411873819a0664b6c70529c398ed7a8a2"} Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.128613 4983 scope.go:117] "RemoveContainer" containerID="92039aeeede095e27e03623279f61c5411873819a0664b6c70529c398ed7a8a2" Nov 25 20:51:21 crc kubenswrapper[4983]: E1125 20:51:21.128903 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-7b64f4fb85-nf6tq_openstack-operators(1ec6aefb-824e-4248-ac00-c1d0b526edc6)\"" 
pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" podUID="1ec6aefb-824e-4248-ac00-c1d0b526edc6" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.135761 4983 generic.go:334] "Generic (PLEG): container finished" podID="da827172-6e3a-42a7-814c-cdfcc18d48d6" containerID="91e0de69017674908598c4b2d8a0bd7d630c9c47211dc5a46efca446194449f5" exitCode=1 Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.135827 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" event={"ID":"da827172-6e3a-42a7-814c-cdfcc18d48d6","Type":"ContainerDied","Data":"91e0de69017674908598c4b2d8a0bd7d630c9c47211dc5a46efca446194449f5"} Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.136450 4983 scope.go:117] "RemoveContainer" containerID="91e0de69017674908598c4b2d8a0bd7d630c9c47211dc5a46efca446194449f5" Nov 25 20:51:21 crc kubenswrapper[4983]: E1125 20:51:21.136722 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-589cbd6b5b-xvxp7_openstack-operators(da827172-6e3a-42a7-814c-cdfcc18d48d6)\"" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" podUID="da827172-6e3a-42a7-814c-cdfcc18d48d6" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.151435 4983 generic.go:334] "Generic (PLEG): container finished" podID="2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98" containerID="ab0f20767af970beb93c6e990c7f9dbdef868319123d4b37e73fc6122b59e2c9" exitCode=1 Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.151548 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" event={"ID":"2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98","Type":"ContainerDied","Data":"ab0f20767af970beb93c6e990c7f9dbdef868319123d4b37e73fc6122b59e2c9"} Nov 25 
20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.152582 4983 scope.go:117] "RemoveContainer" containerID="ab0f20767af970beb93c6e990c7f9dbdef868319123d4b37e73fc6122b59e2c9" Nov 25 20:51:21 crc kubenswrapper[4983]: E1125 20:51:21.153428 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-5d499bf58b-f8bh4_openstack-operators(2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98)\"" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" podUID="2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.198777 4983 scope.go:117] "RemoveContainer" containerID="1708c9a77068dfec18ed8730dc47b4dcc63fcd4eb60fd0dcaa6ea3fe29af7859" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.258992 4983 scope.go:117] "RemoveContainer" containerID="3e9d64d65e58ea7df2d9124a0951eb9ee7d90f8ce11b33d384241318847d1139" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.319406 4983 scope.go:117] "RemoveContainer" containerID="f3c6fc8c1840d8e98b8185fd5d3f03c7e641009cf3daa9c8f72ddb04f016b9e4" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.393186 4983 scope.go:117] "RemoveContainer" containerID="43d73d322ace9f5e5257f02c32c895eacc10e4b59bebab41918aed7b66b3669a" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.425225 4983 scope.go:117] "RemoveContainer" containerID="e247a797cf007e4baa3d65b21cface4683168422ec0bec2f754f806a1b887169" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.651170 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.651267 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" Nov 25 20:51:21 
crc kubenswrapper[4983]: I1125 20:51:21.671330 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.671379 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.690600 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.690666 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.691578 4983 scope.go:117] "RemoveContainer" containerID="4421224c7ebb08cbd300be1936b74006b2fde0b8e43724a7e0ebf4cd3f8df096" Nov 25 20:51:21 crc kubenswrapper[4983]: E1125 20:51:21.691956 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-955677c94-lzn84_openstack-operators(00a7db78-81a7-481d-a20e-135c60e139e3)\"" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" podUID="00a7db78-81a7-481d-a20e-135c60e139e3" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.738182 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.738233 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 
20:51:21.761016 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.761288 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.784020 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.784416 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.784917 4983 scope.go:117] "RemoveContainer" containerID="e01722029263149f6171ba8721e1071527af2b8fb4e91e0b577222b363352dc7" Nov 25 20:51:21 crc kubenswrapper[4983]: E1125 20:51:21.785166 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-5d494799bf-cctnq_openstack-operators(72f1d28e-26ff-43d3-bd93-54c21d9cdd70)\"" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" podUID="72f1d28e-26ff-43d3-bd93-54c21d9cdd70" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.930825 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:51:21 crc kubenswrapper[4983]: I1125 20:51:21.931811 4983 scope.go:117] "RemoveContainer" containerID="bc7c81b4cbadb4eb3ea3bba28a616a384d0d45635730442b3cae70467dfecbb9" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.061126 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.061187 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.062205 4983 scope.go:117] "RemoveContainer" containerID="c01e4e36ac4396bea45ed080266594b5adc85a27a286000a4b96139b2c089e98" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.062514 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-7b4567c7cf-fchv4_openstack-operators(e5edd26f-9ffb-4be8-86c1-99d32e812816)\"" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" podUID="e5edd26f-9ffb-4be8-86c1-99d32e812816" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.064660 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.064711 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.065583 4983 scope.go:117] "RemoveContainer" containerID="a5aec0481055d47f0b8e60ee7cc18b2065fede12afd801d4cfc4ff25be19edcb" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.065908 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-66f4dd4bc7-rwkrr_openstack-operators(afff7723-36e3-42ae-9fac-9f8fdb86d839)\"" 
pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" podUID="afff7723-36e3-42ae-9fac-9f8fdb86d839" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.076983 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.077204 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.115388 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.116671 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.118886 4983 scope.go:117] "RemoveContainer" containerID="629fb45e6dd5aed03ad6f8e0d59e5a710a0a4915571be20834e904f1a9919661" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.119968 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-6fdcddb789-ljpb8_openstack-operators(badb10c7-4c8c-42c4-b481-221377fa7255)\"" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" podUID="badb10c7-4c8c-42c4-b481-221377fa7255" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.150168 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.150320 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.151458 4983 scope.go:117] "RemoveContainer" containerID="ddf3f14ea4a14a862857cfa9fe57af9b6774bdb49fca32cbceb05aeaa9129385" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.151822 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-dj7nt_openstack-operators(9d7c78e4-4890-4527-9db4-131842750615)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" podUID="9d7c78e4-4890-4527-9db4-131842750615" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.163249 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.166075 4983 scope.go:117] "RemoveContainer" containerID="1630196c9d10cb5193e4f92c7dee14b3ff3b2cb8bf68b24ed51bd9d02e166dc5" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.169142 4983 scope.go:117] "RemoveContainer" containerID="ab0f20767af970beb93c6e990c7f9dbdef868319123d4b37e73fc6122b59e2c9" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.169385 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-5d499bf58b-f8bh4_openstack-operators(2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98)\"" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" podUID="2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.175017 4983 scope.go:117] "RemoveContainer" containerID="a4edb1a159b34f9ab58acd319c513c3bef9acc8142ab40e903462795cde216fe" Nov 25 20:51:22 crc 
kubenswrapper[4983]: E1125 20:51:22.175260 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-6b7f75547b-b9lnt_openstack-operators(cf765330-a0f9-4603-a92b-4aec8feaeafb)\"" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" podUID="cf765330-a0f9-4603-a92b-4aec8feaeafb" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.184143 4983 scope.go:117] "RemoveContainer" containerID="4023ca5f6518b2912f3f5bfcb195f558ee81cff86ed8643070dbcf51eac1e40b" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.184599 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-5b77f656f-t5knb_openstack-operators(48b3567f-5b1a-4f14-891c-775c05e2d768)\"" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" podUID="48b3567f-5b1a-4f14-891c-775c05e2d768" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.205154 4983 scope.go:117] "RemoveContainer" containerID="92039aeeede095e27e03623279f61c5411873819a0664b6c70529c398ed7a8a2" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.205489 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-7b64f4fb85-nf6tq_openstack-operators(1ec6aefb-824e-4248-ac00-c1d0b526edc6)\"" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" podUID="1ec6aefb-824e-4248-ac00-c1d0b526edc6" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.209716 4983 scope.go:117] "RemoveContainer" containerID="91e0de69017674908598c4b2d8a0bd7d630c9c47211dc5a46efca446194449f5" Nov 25 20:51:22 crc 
kubenswrapper[4983]: E1125 20:51:22.210296 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-589cbd6b5b-xvxp7_openstack-operators(da827172-6e3a-42a7-814c-cdfcc18d48d6)\"" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" podUID="da827172-6e3a-42a7-814c-cdfcc18d48d6" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.210382 4983 scope.go:117] "RemoveContainer" containerID="e01722029263149f6171ba8721e1071527af2b8fb4e91e0b577222b363352dc7" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.210975 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-5d494799bf-cctnq_openstack-operators(72f1d28e-26ff-43d3-bd93-54c21d9cdd70)\"" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" podUID="72f1d28e-26ff-43d3-bd93-54c21d9cdd70" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.237042 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.237177 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.238022 4983 scope.go:117] "RemoveContainer" containerID="d3d45ee613969be8527633716da9d5b63f317a00ba1f42f38370d5db78bf0479" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.238257 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=ovn-operator-controller-manager-56897c768d-zc5rq_openstack-operators(d7302bdd-d74f-4d95-a354-42fcd52bf22e)\"" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" podUID="d7302bdd-d74f-4d95-a354-42fcd52bf22e" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.258804 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.259199 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.260129 4983 scope.go:117] "RemoveContainer" containerID="37df195dbd11cf2f9caa964fb201a4722d0743043b29a5a01cdeedfa46bad6be" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.260394 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-57988cc5b5-mhjtj_openstack-operators(64141c1d-799a-4d72-aa99-e54975052879)\"" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" podUID="64141c1d-799a-4d72-aa99-e54975052879" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.294300 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.294437 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.294498 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" Nov 25 20:51:22 
crc kubenswrapper[4983]: I1125 20:51:22.296127 4983 scope.go:117] "RemoveContainer" containerID="cb0e4ac0087624140f14a69f3e40ebbeb5a653b462f49355e06bdd4e337fd763" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.296473 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-b7bb74d9f-m9bbx_openstack-operators(92f1d8fa-69cf-49c3-a616-82a185ff8dd5)\"" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" podUID="92f1d8fa-69cf-49c3-a616-82a185ff8dd5" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.296668 4983 scope.go:117] "RemoveContainer" containerID="1e4ca543609b941cf9928a57e8833b9545f0d8d1a38f1f98efde51f9c9ca48e8" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.296680 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.296914 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-d77b94747-4c95t_openstack-operators(5b14316c-9639-4934-a5e9-5381d2797ef5)\"" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" podUID="5b14316c-9639-4934-a5e9-5381d2797ef5" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.421655 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 20:51:22.421730 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" Nov 25 20:51:22 crc kubenswrapper[4983]: I1125 
20:51:22.422778 4983 scope.go:117] "RemoveContainer" containerID="2e76d837019e461a6647697fea0d99d388a7fa5c0c00db18a6f478a0dba141e4" Nov 25 20:51:22 crc kubenswrapper[4983]: E1125 20:51:22.423851 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-656dcb59d4-rpfhz_openstack-operators(1e439ca1-98f3-4650-96da-1e4c1b2da37e)\"" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" podUID="1e439ca1-98f3-4650-96da-1e4c1b2da37e" Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.009243 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.226054 4983 generic.go:334] "Generic (PLEG): container finished" podID="e1668e7f-55bb-415c-b378-1c70483b30a6" containerID="06f16a26be06d8dbdf08ca4719bb65ac69f57eaeb25aa09f922b73535ad349ee" exitCode=1 Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.226141 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" event={"ID":"e1668e7f-55bb-415c-b378-1c70483b30a6","Type":"ContainerDied","Data":"06f16a26be06d8dbdf08ca4719bb65ac69f57eaeb25aa09f922b73535ad349ee"} Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.226184 4983 scope.go:117] "RemoveContainer" containerID="bc7c81b4cbadb4eb3ea3bba28a616a384d0d45635730442b3cae70467dfecbb9" Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.227146 4983 scope.go:117] "RemoveContainer" containerID="06f16a26be06d8dbdf08ca4719bb65ac69f57eaeb25aa09f922b73535ad349ee" Nov 25 20:51:23 crc kubenswrapper[4983]: E1125 20:51:23.227421 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting 
failed container=manager pod=ironic-operator-controller-manager-67cb4dc6d4-9zpxb_openstack-operators(e1668e7f-55bb-415c-b378-1c70483b30a6)\"" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.233036 4983 generic.go:334] "Generic (PLEG): container finished" podID="a096f840-35b3-48c1-8c0e-762b67b8bde0" containerID="a3ec9be40fd0d325e5acc86618d6fa17481fbe10e7e86fb36211980b1bbd3c1c" exitCode=1 Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.233165 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" event={"ID":"a096f840-35b3-48c1-8c0e-762b67b8bde0","Type":"ContainerDied","Data":"a3ec9be40fd0d325e5acc86618d6fa17481fbe10e7e86fb36211980b1bbd3c1c"} Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.233986 4983 scope.go:117] "RemoveContainer" containerID="cb0e4ac0087624140f14a69f3e40ebbeb5a653b462f49355e06bdd4e337fd763" Nov 25 20:51:23 crc kubenswrapper[4983]: E1125 20:51:23.234416 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-b7bb74d9f-m9bbx_openstack-operators(92f1d8fa-69cf-49c3-a616-82a185ff8dd5)\"" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" podUID="92f1d8fa-69cf-49c3-a616-82a185ff8dd5" Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.234450 4983 scope.go:117] "RemoveContainer" containerID="1e4ca543609b941cf9928a57e8833b9545f0d8d1a38f1f98efde51f9c9ca48e8" Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.234855 4983 scope.go:117] "RemoveContainer" containerID="a3ec9be40fd0d325e5acc86618d6fa17481fbe10e7e86fb36211980b1bbd3c1c" Nov 25 20:51:23 crc kubenswrapper[4983]: E1125 20:51:23.235027 4983 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-d77b94747-4c95t_openstack-operators(5b14316c-9639-4934-a5e9-5381d2797ef5)\"" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" podUID="5b14316c-9639-4934-a5e9-5381d2797ef5" Nov 25 20:51:23 crc kubenswrapper[4983]: E1125 20:51:23.235220 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-64cdc6ff96-p8q9g_openstack-operators(a096f840-35b3-48c1-8c0e-762b67b8bde0)\"" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.235637 4983 scope.go:117] "RemoveContainer" containerID="d3d45ee613969be8527633716da9d5b63f317a00ba1f42f38370d5db78bf0479" Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.235809 4983 scope.go:117] "RemoveContainer" containerID="4023ca5f6518b2912f3f5bfcb195f558ee81cff86ed8643070dbcf51eac1e40b" Nov 25 20:51:23 crc kubenswrapper[4983]: E1125 20:51:23.236232 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-56897c768d-zc5rq_openstack-operators(d7302bdd-d74f-4d95-a354-42fcd52bf22e)\"" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" podUID="d7302bdd-d74f-4d95-a354-42fcd52bf22e" Nov 25 20:51:23 crc kubenswrapper[4983]: E1125 20:51:23.236703 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=heat-operator-controller-manager-5b77f656f-t5knb_openstack-operators(48b3567f-5b1a-4f14-891c-775c05e2d768)\"" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" podUID="48b3567f-5b1a-4f14-891c-775c05e2d768" Nov 25 20:51:23 crc kubenswrapper[4983]: I1125 20:51:23.348194 4983 scope.go:117] "RemoveContainer" containerID="1630196c9d10cb5193e4f92c7dee14b3ff3b2cb8bf68b24ed51bd9d02e166dc5" Nov 25 20:51:24 crc kubenswrapper[4983]: I1125 20:51:24.882641 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Nov 25 20:51:24 crc kubenswrapper[4983]: I1125 20:51:24.883336 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 20:51:24 crc kubenswrapper[4983]: I1125 20:51:24.884659 4983 scope.go:117] "RemoveContainer" containerID="49508c5f00ea8af8db407e6c8b401bb4480e7e5022f7a7bcac3493405b04fc7d" Nov 25 20:51:24 crc kubenswrapper[4983]: E1125 20:51:24.885094 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(ae259426-d08e-4d8f-b3e7-f06847f1c2da)\"" pod="openstack/kube-state-metrics-0" podUID="ae259426-d08e-4d8f-b3e7-f06847f1c2da" Nov 25 20:51:26 crc kubenswrapper[4983]: I1125 20:51:26.858538 4983 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-hkqcz" Nov 25 20:51:27 crc kubenswrapper[4983]: I1125 20:51:27.087499 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 20:51:27 crc kubenswrapper[4983]: I1125 20:51:27.178188 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 20:51:27 crc kubenswrapper[4983]: I1125 
20:51:27.371242 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 20:51:27 crc kubenswrapper[4983]: I1125 20:51:27.437808 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 20:51:27 crc kubenswrapper[4983]: I1125 20:51:27.493973 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:51:27 crc kubenswrapper[4983]: I1125 20:51:27.495086 4983 scope.go:117] "RemoveContainer" containerID="74aa735cc137160b6b2e9aa3fb62381a29dfcb859e028601080e7caef696e191" Nov 25 20:51:27 crc kubenswrapper[4983]: E1125 20:51:27.495513 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-57548d458d-qlm9k_openstack-operators(0d3d657c-e179-43c7-abca-c37f8396d1cd)\"" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" podUID="0d3d657c-e179-43c7-abca-c37f8396d1cd" Nov 25 20:51:27 crc kubenswrapper[4983]: I1125 20:51:27.652320 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 20:51:27 crc kubenswrapper[4983]: I1125 20:51:27.739115 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 20:51:27 crc kubenswrapper[4983]: I1125 20:51:27.793405 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg" Nov 25 20:51:28 crc kubenswrapper[4983]: I1125 20:51:28.004890 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 20:51:28 crc 
kubenswrapper[4983]: I1125 20:51:28.103504 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:51:28 crc kubenswrapper[4983]: I1125 20:51:28.104982 4983 scope.go:117] "RemoveContainer" containerID="2094fd3153577c010182a75fe0f6a1565cc331ae525fd1287db5df2a1c4ad611" Nov 25 20:51:28 crc kubenswrapper[4983]: E1125 20:51:28.105419 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-5cf7cd9d4-bwfnd_openstack-operators(f32095da-1fdc-4d52-b082-98b39652cdc6)\"" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" podUID="f32095da-1fdc-4d52-b082-98b39652cdc6" Nov 25 20:51:28 crc kubenswrapper[4983]: I1125 20:51:28.166843 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 20:51:28 crc kubenswrapper[4983]: I1125 20:51:28.377538 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-bcf8p" Nov 25 20:51:28 crc kubenswrapper[4983]: I1125 20:51:28.469640 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 20:51:28 crc kubenswrapper[4983]: I1125 20:51:28.660702 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 20:51:28 crc kubenswrapper[4983]: I1125 20:51:28.685258 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-67ndd" Nov 25 20:51:28 crc kubenswrapper[4983]: I1125 20:51:28.740623 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 20:51:28 crc 
kubenswrapper[4983]: I1125 20:51:28.803919 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 20:51:28 crc kubenswrapper[4983]: I1125 20:51:28.978937 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-blkf6" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.016646 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.095826 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.143655 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.228788 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.326298 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.359396 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.433656 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.465051 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-lp2vq" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.484085 4983 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.485265 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.489676 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.552762 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-jgf4r" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.642378 4983 scope.go:117] "RemoveContainer" containerID="480172d063f01071881eb46657e72676ebccdded430b4849f96406415a536761" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.659953 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.664857 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.796187 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.869735 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-q4pfh" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.911917 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 20:51:29 crc kubenswrapper[4983]: I1125 20:51:29.948635 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-nw95h" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.032507 4983 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jlx4p" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.032593 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.052166 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.058192 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.081100 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.101582 4983 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lvvxz" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.176315 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.189179 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.224982 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.269734 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.281814 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 20:51:30 crc kubenswrapper[4983]: 
I1125 20:51:30.330321 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.334351 4983 generic.go:334] "Generic (PLEG): container finished" podID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" containerID="bc5f14a088776850f3047677fee5b4b01b60dbcb5e1258a9928c672d1fd76bd8" exitCode=1 Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.334396 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" event={"ID":"74baeb7c-21f0-4d1c-9a61-7694f59cc161","Type":"ContainerDied","Data":"bc5f14a088776850f3047677fee5b4b01b60dbcb5e1258a9928c672d1fd76bd8"} Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.334430 4983 scope.go:117] "RemoveContainer" containerID="480172d063f01071881eb46657e72676ebccdded430b4849f96406415a536761" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.335425 4983 scope.go:117] "RemoveContainer" containerID="bc5f14a088776850f3047677fee5b4b01b60dbcb5e1258a9928c672d1fd76bd8" Nov 25 20:51:30 crc kubenswrapper[4983]: E1125 20:51:30.335916 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-6dcc87d69d-p8fwj_metallb-system(74baeb7c-21f0-4d1c-9a61-7694f59cc161)\"" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.364395 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.364602 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.369684 4983 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.377356 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.386670 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.450229 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.481621 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.509641 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.620744 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.633392 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.682887 4983 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.751001 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.763260 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.787276 4983 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.818514 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.841795 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.853813 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-jvhsb" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.870095 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.876588 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.962525 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.965466 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.975722 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 20:51:30 crc kubenswrapper[4983]: I1125 20:51:30.993397 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.044752 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.112267 4983 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.134577 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.139595 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-29652" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.153766 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.261871 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.262882 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-6jtlq" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.308672 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mt82l" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.319546 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.366179 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.396764 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.448717 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.457960 4983 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.567935 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.577647 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.593426 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.605709 4983 scope.go:117] "RemoveContainer" containerID="2cf4876ee036e56235ae081269cb2aaad35bfdc9337d5bde808bfa444a977873" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.688976 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.691261 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.704678 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.755724 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.756051 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.756964 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ldxd7" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.761984 4983 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.765286 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.791026 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-lnbzd" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.813539 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.822386 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-78sw9" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.834254 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.836171 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hhcsg" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.870921 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.887715 4983 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-njkjp" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.918272 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.919301 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.929793 4983 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.930712 4983 scope.go:117] "RemoveContainer" containerID="06f16a26be06d8dbdf08ca4719bb65ac69f57eaeb25aa09f922b73535ad349ee" Nov 25 20:51:31 crc kubenswrapper[4983]: E1125 20:51:31.931167 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-67cb4dc6d4-9zpxb_openstack-operators(e1668e7f-55bb-415c-b378-1c70483b30a6)\"" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" Nov 25 20:51:31 crc kubenswrapper[4983]: I1125 20:51:31.947156 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.004103 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.086264 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.108249 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.130247 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.135932 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.187718 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.188982 4983 scope.go:117] "RemoveContainer" containerID="a3ec9be40fd0d325e5acc86618d6fa17481fbe10e7e86fb36211980b1bbd3c1c" Nov 25 20:51:32 crc kubenswrapper[4983]: E1125 20:51:32.189316 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-64cdc6ff96-p8q9g_openstack-operators(a096f840-35b3-48c1-8c0e-762b67b8bde0)\"" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.226916 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.326448 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-lr7wt" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.333644 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.364479 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dt2hz" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.374783 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-bwf7d" event={"ID":"ff284fea-7792-40e1-8ede-f52412a6c014","Type":"ContainerStarted","Data":"01d534f2775e95f9a4de5a2175e9455706ac239bd24da2ec80e133acd2190c5d"} Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.417437 4983 reflector.go:368] Caches populated for *v1.Secret from 
object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.444893 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.562415 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.598104 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.604936 4983 scope.go:117] "RemoveContainer" containerID="a5aec0481055d47f0b8e60ee7cc18b2065fede12afd801d4cfc4ff25be19edcb" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.605151 4983 scope.go:117] "RemoveContainer" containerID="e01722029263149f6171ba8721e1071527af2b8fb4e91e0b577222b363352dc7" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.605891 4983 scope.go:117] "RemoveContainer" containerID="ddf3f14ea4a14a862857cfa9fe57af9b6774bdb49fca32cbceb05aeaa9129385" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.606297 4983 scope.go:117] "RemoveContainer" containerID="91e0de69017674908598c4b2d8a0bd7d630c9c47211dc5a46efca446194449f5" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.647017 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.679899 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.703028 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.715779 4983 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.735584 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.740722 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.755976 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6b8dd87645-g89th" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.790332 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.800884 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.808795 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.837766 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.839427 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.840620 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.845704 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.880406 4983 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.963120 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.984874 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-gvwgd" Nov 25 20:51:32 crc kubenswrapper[4983]: I1125 20:51:32.993612 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-tmg8d" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.030010 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.039722 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.042246 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.047023 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.069696 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.132739 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.141709 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.151694 4983 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.210129 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-65kd8" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.213896 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-28kzp" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.217167 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.219233 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.393151 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" event={"ID":"72f1d28e-26ff-43d3-bd93-54c21d9cdd70","Type":"ContainerStarted","Data":"dd6f5bff47efb5dbde4feea115805c9e71d680342b3a5bad4cdf54e2d09fc287"} Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.393654 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.399867 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" event={"ID":"afff7723-36e3-42ae-9fac-9f8fdb86d839","Type":"ContainerStarted","Data":"0585383c2473cbf4a349dc84bd230e89ee6da9e2766f609cfd735e5fd43c13bc"} Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.400177 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" Nov 25 20:51:33 crc 
kubenswrapper[4983]: I1125 20:51:33.403403 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.404652 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" event={"ID":"9d7c78e4-4890-4527-9db4-131842750615","Type":"ContainerStarted","Data":"b5531b702b473760cd9ee5ce46e85cf626c4935f0fa2160189994c09c114c114"} Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.405174 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.414647 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" event={"ID":"da827172-6e3a-42a7-814c-cdfcc18d48d6","Type":"ContainerStarted","Data":"d28b6cb058fc06ec3c561705cd42461f8343e08c8b8997e8e6d29a9a1901d178"} Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.414970 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.435596 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.440157 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.472653 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.496018 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 20:51:33 
crc kubenswrapper[4983]: I1125 20:51:33.600536 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2dxqn" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.604990 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.628747 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.637760 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.638697 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.648315 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.654611 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.737924 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.934822 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.942773 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nqzsf" Nov 25 20:51:33 crc kubenswrapper[4983]: I1125 20:51:33.999677 4983 reflector.go:368] Caches populated for 
*v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.001874 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.025009 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.119745 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.125324 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.128114 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ghzdx" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.135325 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.163582 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.195796 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.218990 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.281765 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.296774 4983 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.352199 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.355180 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.364341 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.397988 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.404391 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.418099 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.444319 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.471624 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.503950 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.606333 4983 scope.go:117] "RemoveContainer" containerID="2e76d837019e461a6647697fea0d99d388a7fa5c0c00db18a6f478a0dba141e4" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.606768 4983 scope.go:117] "RemoveContainer" 
containerID="c01e4e36ac4396bea45ed080266594b5adc85a27a286000a4b96139b2c089e98" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.606983 4983 scope.go:117] "RemoveContainer" containerID="1e4ca543609b941cf9928a57e8833b9545f0d8d1a38f1f98efde51f9c9ca48e8" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.607676 4983 scope.go:117] "RemoveContainer" containerID="cb0e4ac0087624140f14a69f3e40ebbeb5a653b462f49355e06bdd4e337fd763" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.643040 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.648101 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hs4zc" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.650679 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.722044 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.728326 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.755867 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.778239 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.790004 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.793755 4983 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.802286 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.802790 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lfvsl" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.813180 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.883280 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.884243 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.912808 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.941668 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.966173 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.971250 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 20:51:34 crc kubenswrapper[4983]: I1125 20:51:34.976235 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.043181 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 
20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.101325 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jkq2c" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.149976 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.206144 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.207207 4983 scope.go:117] "RemoveContainer" containerID="bc5f14a088776850f3047677fee5b4b01b60dbcb5e1258a9928c672d1fd76bd8" Nov 25 20:51:35 crc kubenswrapper[4983]: E1125 20:51:35.207524 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-6dcc87d69d-p8fwj_metallb-system(74baeb7c-21f0-4d1c-9a61-7694f59cc161)\"" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.222812 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.280853 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.308258 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.341327 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.404492 4983 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.458992 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" event={"ID":"5b14316c-9639-4934-a5e9-5381d2797ef5","Type":"ContainerStarted","Data":"7cb8f73cbc6b40068d084b15147017fd8024a43d14807b924f6836f528b419d7"} Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.459364 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.465705 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" event={"ID":"e5edd26f-9ffb-4be8-86c1-99d32e812816","Type":"ContainerStarted","Data":"180e2871c94e63c984f6cab07c92b021cbbc2e22b4028e4c562cf23279f6447f"} Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.466235 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.473839 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" event={"ID":"92f1d8fa-69cf-49c3-a616-82a185ff8dd5","Type":"ContainerStarted","Data":"c54f0d641eb95bb89220739f57f1a8909008c72a5e6aac3e3450ef514acd7103"} Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.474596 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.476640 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" 
event={"ID":"1e439ca1-98f3-4650-96da-1e4c1b2da37e","Type":"ContainerStarted","Data":"1f3aa1392235473a31cd9c7868900cf445cb3700244b9807253dfea816a9d09d"} Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.476912 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.537314 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.545120 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.585380 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.603367 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.605654 4983 scope.go:117] "RemoveContainer" containerID="92039aeeede095e27e03623279f61c5411873819a0664b6c70529c398ed7a8a2" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.605745 4983 scope.go:117] "RemoveContainer" containerID="629fb45e6dd5aed03ad6f8e0d59e5a710a0a4915571be20834e904f1a9919661" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.605846 4983 scope.go:117] "RemoveContainer" containerID="4421224c7ebb08cbd300be1936b74006b2fde0b8e43724a7e0ebf4cd3f8df096" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.642002 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.645784 4983 reflector.go:368] Caches populated for *v1.Secret from 
object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.680790 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-9v9qs" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.737678 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.743010 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.806767 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.817310 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.818504 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.857960 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.872033 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.921058 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dcmsm" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.957459 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 20:51:35 crc kubenswrapper[4983]: I1125 20:51:35.976725 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 
20:51:36.030787 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.057937 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pmbck" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.129142 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.183722 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.241417 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.252759 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.308800 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.310750 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-x6tpj" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.341156 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.358449 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.362790 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.384813 4983 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.406816 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.412720 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.470099 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.489010 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.491469 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" event={"ID":"00a7db78-81a7-481d-a20e-135c60e139e3","Type":"ContainerStarted","Data":"77b745c3bcdcd910749b9fd8b960873f448f2b83da2fe130e7c43cd03ed9e81f"} Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.492446 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.495632 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" event={"ID":"badb10c7-4c8c-42c4-b481-221377fa7255","Type":"ContainerStarted","Data":"9f85398a8aad347f1eaf515bd65241651bb2e730b1ebe455201dbaa5126067b3"} Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.495835 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.498324 4983 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" event={"ID":"1ec6aefb-824e-4248-ac00-c1d0b526edc6","Type":"ContainerStarted","Data":"a5e9cca53197329b0b20102335de33bebbb6265c88882ef52b86be1633867aee"} Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.520719 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.551755 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.575536 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.582252 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.606080 4983 scope.go:117] "RemoveContainer" containerID="37df195dbd11cf2f9caa964fb201a4722d0743043b29a5a01cdeedfa46bad6be" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.606130 4983 scope.go:117] "RemoveContainer" containerID="ab0f20767af970beb93c6e990c7f9dbdef868319123d4b37e73fc6122b59e2c9" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.606199 4983 scope.go:117] "RemoveContainer" containerID="49508c5f00ea8af8db407e6c8b401bb4480e7e5022f7a7bcac3493405b04fc7d" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.649285 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qs9rd" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.657263 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-x68nk" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.687299 4983 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.692597 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.709817 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.747234 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.769329 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.787388 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.800067 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.826162 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.852196 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2gt5j" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.903046 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.917994 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.927080 4983 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.943321 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-cxxp7" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.944944 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.958432 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-47hsh" Nov 25 20:51:36 crc kubenswrapper[4983]: I1125 20:51:36.977398 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.012684 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.037287 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.063703 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-6gbp8" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.087974 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.192407 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.202312 4983 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.225257 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.234802 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.305595 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.316740 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.318123 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.333287 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.398679 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.424064 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.451988 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.490938 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.492756 4983 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dkrf2" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.494003 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.494898 4983 scope.go:117] "RemoveContainer" containerID="74aa735cc137160b6b2e9aa3fb62381a29dfcb859e028601080e7caef696e191" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.514580 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" event={"ID":"64141c1d-799a-4d72-aa99-e54975052879","Type":"ContainerStarted","Data":"83b72ee320447ac77c5c8c4fd9a03fb5102088d3fd97b58bea86463f2f91e8f3"} Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.514989 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.518421 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" event={"ID":"2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98","Type":"ContainerStarted","Data":"448e5e031d255649de78754bb74e3df402a91ef67361fc6ac60d54c08381e5b5"} Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.518630 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.518860 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.522749 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"ae259426-d08e-4d8f-b3e7-f06847f1c2da","Type":"ContainerStarted","Data":"1b29cf9d1425d75e0c4931742b2c31af5f839394a72ffae07a8fb86fa0d3052e"} Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.523438 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.599136 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.605534 4983 scope.go:117] "RemoveContainer" containerID="4023ca5f6518b2912f3f5bfcb195f558ee81cff86ed8643070dbcf51eac1e40b" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.606131 4983 scope.go:117] "RemoveContainer" containerID="d3d45ee613969be8527633716da9d5b63f317a00ba1f42f38370d5db78bf0479" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.606418 4983 scope.go:117] "RemoveContainer" containerID="a4edb1a159b34f9ab58acd319c513c3bef9acc8142ab40e903462795cde216fe" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.624661 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.671129 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.671403 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.742906 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.811429 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.814654 4983 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.863792 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.887925 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.896706 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.919588 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.931172 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.936943 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 20:51:37 crc kubenswrapper[4983]: I1125 20:51:37.973748 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.029842 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.043824 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.061115 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 
20:51:38.069629 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.092467 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.093989 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.103073 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.104059 4983 scope.go:117] "RemoveContainer" containerID="2094fd3153577c010182a75fe0f6a1565cc331ae525fd1287db5df2a1c4ad611" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.141363 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.158033 4983 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.167973 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.211856 4983 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.230234 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.230302 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.230897 4983 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.243591 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.248477 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.264772 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.264746409 podStartE2EDuration="22.264746409s" podCreationTimestamp="2025-11-25 20:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 20:51:38.252374112 +0000 UTC m=+1479.364907534" watchObservedRunningTime="2025-11-25 20:51:38.264746409 +0000 UTC m=+1479.377279801" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.297028 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.319461 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.322367 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.334832 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.366117 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.386314 4983 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.386626 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.436009 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.452094 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-lnrps" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.506760 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-n5qk7" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.573362 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" event={"ID":"48b3567f-5b1a-4f14-891c-775c05e2d768","Type":"ContainerStarted","Data":"ee5d69ba711aba5783270a434570e22f8e9a0b9928369aaf80d91b93c8802803"} Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.573910 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.576125 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" event={"ID":"0d3d657c-e179-43c7-abca-c37f8396d1cd","Type":"ContainerStarted","Data":"63534abb77c30c714dc8bb99ba9bd7d791cd097a3357aaade4e58144c0ad7727"} Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.576515 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 
20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.578032 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" event={"ID":"f32095da-1fdc-4d52-b082-98b39652cdc6","Type":"ContainerStarted","Data":"ad1b0b16ca8a67e7a6518725f8b30492d1b6064875b345e13f8963c6274f9450"} Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.578630 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.581369 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" event={"ID":"d7302bdd-d74f-4d95-a354-42fcd52bf22e","Type":"ContainerStarted","Data":"3933a0b6c5bfaec25ce07aca5ac352884fe1e92e9578fb0133d931fed33293f0"} Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.582269 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.589400 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" event={"ID":"cf765330-a0f9-4603-a92b-4aec8feaeafb","Type":"ContainerStarted","Data":"bb5669de8a0bc626de3c3cd22660b64a9fe178b7db1328e02e4c139cc901d4d0"} Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.590691 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.660940 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.673715 4983 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.681957 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.697395 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.718386 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.740124 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.741009 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.755588 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.758099 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.823470 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.839973 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.874544 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-6c5kw" Nov 25 20:51:38 crc 
kubenswrapper[4983]: I1125 20:51:38.921120 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.989616 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 20:51:38 crc kubenswrapper[4983]: I1125 20:51:38.990145 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.012307 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.030573 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-gkvqg" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.037908 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.039972 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7mg2v" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.079185 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.119735 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-t8tlz" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.191147 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.236424 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" 
Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.239435 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.251253 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.307385 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.313053 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.403475 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.404992 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.435374 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.475221 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.475328 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.504982 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.540278 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 
20:51:39.611770 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.701953 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.727117 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.763606 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.786499 4983 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.797977 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.815857 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.823780 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.884953 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-szl5n" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.953435 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 
20:51:39.953523 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.953651 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.954585 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e12df31370c6ce33dc30cef4c0a5235025ed26a0ae83ddc51872ed125d9d82bb"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.954668 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://e12df31370c6ce33dc30cef4c0a5235025ed26a0ae83ddc51872ed125d9d82bb" gracePeriod=600 Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.957822 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-frx5n" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.978784 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 20:51:39 crc kubenswrapper[4983]: I1125 20:51:39.991507 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 20:51:40 crc 
kubenswrapper[4983]: I1125 20:51:40.008949 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.076409 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.175475 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.179063 4983 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-297rv" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.280011 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.290129 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.321548 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.359634 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.382058 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.400898 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.425481 4983 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.514443 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.565645 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.594832 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.620957 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="e12df31370c6ce33dc30cef4c0a5235025ed26a0ae83ddc51872ed125d9d82bb" exitCode=0 Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.621031 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"e12df31370c6ce33dc30cef4c0a5235025ed26a0ae83ddc51872ed125d9d82bb"} Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.621090 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574"} Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.621131 4983 scope.go:117] "RemoveContainer" containerID="564c2d7b04bb43d995119a30a67c66d1a1f25eab8467f75e61575755980ee6c6" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.640731 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.657266 4983 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-r8jjl" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.699294 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.734708 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.737233 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.782584 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.818080 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.843090 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.887587 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.898402 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.905907 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.930005 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 20:51:40 crc kubenswrapper[4983]: I1125 20:51:40.998351 4983 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.069850 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.087260 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.088264 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.116600 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.132620 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.160934 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.201534 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.259286 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.286902 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.287744 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.319148 4983 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.340792 4983 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.406066 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.428359 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.438327 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bs7sb" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.443727 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-m68cm" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.446473 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.550169 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.599924 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.609951 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.621142 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.652478 
4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.656511 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-nf6tq" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.668273 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.694235 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-955677c94-lzn84" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.696546 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.747228 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-xvxp7" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.794211 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-cctnq" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.846233 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.859071 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.903289 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 
20:51:41.904165 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.929889 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.930947 4983 scope.go:117] "RemoveContainer" containerID="06f16a26be06d8dbdf08ca4719bb65ac69f57eaeb25aa09f922b73535ad349ee" Nov 25 20:51:41 crc kubenswrapper[4983]: E1125 20:51:41.931196 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-67cb4dc6d4-9zpxb_openstack-operators(e1668e7f-55bb-415c-b378-1c70483b30a6)\"" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" podUID="e1668e7f-55bb-415c-b378-1c70483b30a6" Nov 25 20:51:41 crc kubenswrapper[4983]: I1125 20:51:41.967139 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.064347 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-fchv4" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.068147 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-rwkrr" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.079000 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-f8bh4" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.104177 4983 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-console"/"networking-console-plugin" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.105905 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-tnfqx" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.117466 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-ljpb8" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.160200 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-dj7nt" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.163388 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.164365 4983 scope.go:117] "RemoveContainer" containerID="a3ec9be40fd0d325e5acc86618d6fa17481fbe10e7e86fb36211980b1bbd3c1c" Nov 25 20:51:42 crc kubenswrapper[4983]: E1125 20:51:42.164660 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-64cdc6ff96-p8q9g_openstack-operators(a096f840-35b3-48c1-8c0e-762b67b8bde0)\"" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" podUID="a096f840-35b3-48c1-8c0e-762b67b8bde0" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.175144 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.198794 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 
20:51:42.248290 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-zc5rq" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.260789 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-mhjtj" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.299682 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-b7bb74d9f-m9bbx" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.302379 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d77b94747-4c95t" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.381326 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.383276 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.394072 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.426193 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-rpfhz" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.471018 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.495583 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 20:51:42 crc kubenswrapper[4983]: 
I1125 20:51:42.525172 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.525822 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.546917 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 20:51:42 crc kubenswrapper[4983]: I1125 20:51:42.976958 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 20:51:43 crc kubenswrapper[4983]: I1125 20:51:43.106106 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 20:51:43 crc kubenswrapper[4983]: I1125 20:51:43.151442 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-4z5vp" Nov 25 20:51:43 crc kubenswrapper[4983]: I1125 20:51:43.173724 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 20:51:43 crc kubenswrapper[4983]: I1125 20:51:43.246419 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 20:51:43 crc kubenswrapper[4983]: I1125 20:51:43.398838 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 20:51:43 crc kubenswrapper[4983]: I1125 20:51:43.427227 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hq8ls" Nov 25 20:51:43 crc kubenswrapper[4983]: I1125 20:51:43.520076 4983 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"cert-cinder-internal-svc" Nov 25 20:51:43 crc kubenswrapper[4983]: I1125 20:51:43.720803 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 20:51:44 crc kubenswrapper[4983]: I1125 20:51:44.895950 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 20:51:45 crc kubenswrapper[4983]: I1125 20:51:45.606224 4983 scope.go:117] "RemoveContainer" containerID="bc5f14a088776850f3047677fee5b4b01b60dbcb5e1258a9928c672d1fd76bd8" Nov 25 20:51:45 crc kubenswrapper[4983]: E1125 20:51:45.606427 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-6dcc87d69d-p8fwj_metallb-system(74baeb7c-21f0-4d1c-9a61-7694f59cc161)\"" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" podUID="74baeb7c-21f0-4d1c-9a61-7694f59cc161" Nov 25 20:51:47 crc kubenswrapper[4983]: I1125 20:51:47.505310 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-qlm9k" Nov 25 20:51:48 crc kubenswrapper[4983]: I1125 20:51:48.117462 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5cf7cd9d4-bwfnd" Nov 25 20:51:49 crc kubenswrapper[4983]: I1125 20:51:49.470126 4983 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 20:51:49 crc kubenswrapper[4983]: I1125 20:51:49.471872 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" 
containerID="cri-o://36f7b702466d9ac4d63fec5fbab59de4d51eac151470978fab039ed8b9e8c760" gracePeriod=5 Nov 25 20:51:51 crc kubenswrapper[4983]: I1125 20:51:51.677212 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-b9lnt" Nov 25 20:51:51 crc kubenswrapper[4983]: I1125 20:51:51.769845 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-t5knb" Nov 25 20:51:51 crc kubenswrapper[4983]: I1125 20:51:51.827113 4983 generic.go:334] "Generic (PLEG): container finished" podID="96bb1f23-94d5-4a68-995b-da2394c75158" containerID="e455a7e7108c703f76a586808b0c193129631df4c8605c0f0ccf9f14a3d36282" exitCode=0 Nov 25 20:51:51 crc kubenswrapper[4983]: I1125 20:51:51.827174 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" event={"ID":"96bb1f23-94d5-4a68-995b-da2394c75158","Type":"ContainerDied","Data":"e455a7e7108c703f76a586808b0c193129631df4c8605c0f0ccf9f14a3d36282"} Nov 25 20:51:52 crc kubenswrapper[4983]: I1125 20:51:52.606132 4983 scope.go:117] "RemoveContainer" containerID="06f16a26be06d8dbdf08ca4719bb65ac69f57eaeb25aa09f922b73535ad349ee" Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.277662 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.420312 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm5z4\" (UniqueName: \"kubernetes.io/projected/96bb1f23-94d5-4a68-995b-da2394c75158-kube-api-access-qm5z4\") pod \"96bb1f23-94d5-4a68-995b-da2394c75158\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.420389 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-ssh-key\") pod \"96bb1f23-94d5-4a68-995b-da2394c75158\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.420489 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-bootstrap-combined-ca-bundle\") pod \"96bb1f23-94d5-4a68-995b-da2394c75158\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.420591 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-inventory\") pod \"96bb1f23-94d5-4a68-995b-da2394c75158\" (UID: \"96bb1f23-94d5-4a68-995b-da2394c75158\") " Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.431590 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "96bb1f23-94d5-4a68-995b-da2394c75158" (UID: "96bb1f23-94d5-4a68-995b-da2394c75158"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.440131 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96bb1f23-94d5-4a68-995b-da2394c75158-kube-api-access-qm5z4" (OuterVolumeSpecName: "kube-api-access-qm5z4") pod "96bb1f23-94d5-4a68-995b-da2394c75158" (UID: "96bb1f23-94d5-4a68-995b-da2394c75158"). InnerVolumeSpecName "kube-api-access-qm5z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.453074 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "96bb1f23-94d5-4a68-995b-da2394c75158" (UID: "96bb1f23-94d5-4a68-995b-da2394c75158"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.470792 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-inventory" (OuterVolumeSpecName: "inventory") pod "96bb1f23-94d5-4a68-995b-da2394c75158" (UID: "96bb1f23-94d5-4a68-995b-da2394c75158"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.523685 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm5z4\" (UniqueName: \"kubernetes.io/projected/96bb1f23-94d5-4a68-995b-da2394c75158-kube-api-access-qm5z4\") on node \"crc\" DevicePath \"\"" Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.523722 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.523740 4983 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.523756 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96bb1f23-94d5-4a68-995b-da2394c75158-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.883333 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" event={"ID":"96bb1f23-94d5-4a68-995b-da2394c75158","Type":"ContainerDied","Data":"161ded70eff042a068c8bab4e63a24f12667790d0f371edc45f3815bfc6c0726"} Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.883451 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="161ded70eff042a068c8bab4e63a24f12667790d0f371edc45f3815bfc6c0726" Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.883644 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h" Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.889464 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" event={"ID":"e1668e7f-55bb-415c-b378-1c70483b30a6","Type":"ContainerStarted","Data":"6b0361249fea19b5a0506aa6200f28228d319e4560518b5e117537e8d506b93d"} Nov 25 20:51:53 crc kubenswrapper[4983]: I1125 20:51:53.890297 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:51:54 crc kubenswrapper[4983]: I1125 20:51:54.911335 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 20:51:54 crc kubenswrapper[4983]: I1125 20:51:54.911419 4983 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="36f7b702466d9ac4d63fec5fbab59de4d51eac151470978fab039ed8b9e8c760" exitCode=137 Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.185023 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.185126 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.269350 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.269445 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.269487 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.269711 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.269776 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.269970 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: 
"resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.270629 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.270675 4983 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.270678 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.270789 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.279152 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.374375 4983 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.374424 4983 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.374494 4983 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.374512 4983 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.605490 4983 scope.go:117] "RemoveContainer" containerID="a3ec9be40fd0d325e5acc86618d6fa17481fbe10e7e86fb36211980b1bbd3c1c" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.631748 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.932625 
4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" event={"ID":"a096f840-35b3-48c1-8c0e-762b67b8bde0","Type":"ContainerStarted","Data":"8f5cfac7e235b3ec5f5ef6688df1f8d0ce11d8470dec8132a5477df3c5f53248"} Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.934842 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.939118 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.939213 4983 scope.go:117] "RemoveContainer" containerID="36f7b702466d9ac4d63fec5fbab59de4d51eac151470978fab039ed8b9e8c760" Nov 25 20:51:55 crc kubenswrapper[4983]: I1125 20:51:55.939336 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 20:51:59 crc kubenswrapper[4983]: I1125 20:51:59.619631 4983 scope.go:117] "RemoveContainer" containerID="bc5f14a088776850f3047677fee5b4b01b60dbcb5e1258a9928c672d1fd76bd8" Nov 25 20:52:00 crc kubenswrapper[4983]: I1125 20:52:00.002855 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" event={"ID":"74baeb7c-21f0-4d1c-9a61-7694f59cc161","Type":"ContainerStarted","Data":"f55b58ceb8d7b92eb8cbfe278afae8377191363eff127c00830294041361ecaf"} Nov 25 20:52:00 crc kubenswrapper[4983]: I1125 20:52:00.003607 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:52:01 crc kubenswrapper[4983]: I1125 20:52:01.933922 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-9zpxb" Nov 25 20:52:02 crc kubenswrapper[4983]: I1125 20:52:02.178723 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-p8q9g" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.637804 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pdvtw"] Nov 25 20:52:06 crc kubenswrapper[4983]: E1125 20:52:06.639005 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bb1f23-94d5-4a68-995b-da2394c75158" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.639028 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bb1f23-94d5-4a68-995b-da2394c75158" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 20:52:06 crc kubenswrapper[4983]: E1125 20:52:06.639079 4983 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" containerName="installer" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.639087 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" containerName="installer" Nov 25 20:52:06 crc kubenswrapper[4983]: E1125 20:52:06.639099 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.639107 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.639375 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.639387 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2745ce-1570-4841-8110-1249c0f897e7" containerName="installer" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.639417 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="96bb1f23-94d5-4a68-995b-da2394c75158" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.641278 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.657432 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pdvtw"] Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.768785 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm7sr\" (UniqueName: \"kubernetes.io/projected/644f6e59-3cda-499d-bb0f-f75730d24ebd-kube-api-access-pm7sr\") pod \"redhat-operators-pdvtw\" (UID: \"644f6e59-3cda-499d-bb0f-f75730d24ebd\") " pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.769506 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644f6e59-3cda-499d-bb0f-f75730d24ebd-catalog-content\") pod \"redhat-operators-pdvtw\" (UID: \"644f6e59-3cda-499d-bb0f-f75730d24ebd\") " pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.769568 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644f6e59-3cda-499d-bb0f-f75730d24ebd-utilities\") pod \"redhat-operators-pdvtw\" (UID: \"644f6e59-3cda-499d-bb0f-f75730d24ebd\") " pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.872445 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644f6e59-3cda-499d-bb0f-f75730d24ebd-catalog-content\") pod \"redhat-operators-pdvtw\" (UID: \"644f6e59-3cda-499d-bb0f-f75730d24ebd\") " pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.872516 4983 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644f6e59-3cda-499d-bb0f-f75730d24ebd-utilities\") pod \"redhat-operators-pdvtw\" (UID: \"644f6e59-3cda-499d-bb0f-f75730d24ebd\") " pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.872580 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm7sr\" (UniqueName: \"kubernetes.io/projected/644f6e59-3cda-499d-bb0f-f75730d24ebd-kube-api-access-pm7sr\") pod \"redhat-operators-pdvtw\" (UID: \"644f6e59-3cda-499d-bb0f-f75730d24ebd\") " pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.873195 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644f6e59-3cda-499d-bb0f-f75730d24ebd-catalog-content\") pod \"redhat-operators-pdvtw\" (UID: \"644f6e59-3cda-499d-bb0f-f75730d24ebd\") " pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.873259 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644f6e59-3cda-499d-bb0f-f75730d24ebd-utilities\") pod \"redhat-operators-pdvtw\" (UID: \"644f6e59-3cda-499d-bb0f-f75730d24ebd\") " pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.892872 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm7sr\" (UniqueName: \"kubernetes.io/projected/644f6e59-3cda-499d-bb0f-f75730d24ebd-kube-api-access-pm7sr\") pod \"redhat-operators-pdvtw\" (UID: \"644f6e59-3cda-499d-bb0f-f75730d24ebd\") " pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:06 crc kubenswrapper[4983]: I1125 20:52:06.968750 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:07 crc kubenswrapper[4983]: I1125 20:52:07.540123 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pdvtw"] Nov 25 20:52:08 crc kubenswrapper[4983]: I1125 20:52:08.129473 4983 generic.go:334] "Generic (PLEG): container finished" podID="644f6e59-3cda-499d-bb0f-f75730d24ebd" containerID="acfba67c234446e29cbac2c903922d5ca9e3f7b4d2475dcea369affbe2a24e1b" exitCode=0 Nov 25 20:52:08 crc kubenswrapper[4983]: I1125 20:52:08.129903 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdvtw" event={"ID":"644f6e59-3cda-499d-bb0f-f75730d24ebd","Type":"ContainerDied","Data":"acfba67c234446e29cbac2c903922d5ca9e3f7b4d2475dcea369affbe2a24e1b"} Nov 25 20:52:08 crc kubenswrapper[4983]: I1125 20:52:08.129938 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdvtw" event={"ID":"644f6e59-3cda-499d-bb0f-f75730d24ebd","Type":"ContainerStarted","Data":"703af15a6a4a65d725c8cb968d2ad107c036d51e51a440af2ca9a5641d414386"} Nov 25 20:52:08 crc kubenswrapper[4983]: I1125 20:52:08.974632 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7khpl"] Nov 25 20:52:08 crc kubenswrapper[4983]: I1125 20:52:08.977795 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:08 crc kubenswrapper[4983]: I1125 20:52:08.989268 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7khpl"] Nov 25 20:52:09 crc kubenswrapper[4983]: I1125 20:52:09.045428 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21634d0b-fbfc-409b-9ab9-9590fc78e410-catalog-content\") pod \"certified-operators-7khpl\" (UID: \"21634d0b-fbfc-409b-9ab9-9590fc78e410\") " pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:09 crc kubenswrapper[4983]: I1125 20:52:09.045491 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21634d0b-fbfc-409b-9ab9-9590fc78e410-utilities\") pod \"certified-operators-7khpl\" (UID: \"21634d0b-fbfc-409b-9ab9-9590fc78e410\") " pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:09 crc kubenswrapper[4983]: I1125 20:52:09.045628 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrgs9\" (UniqueName: \"kubernetes.io/projected/21634d0b-fbfc-409b-9ab9-9590fc78e410-kube-api-access-rrgs9\") pod \"certified-operators-7khpl\" (UID: \"21634d0b-fbfc-409b-9ab9-9590fc78e410\") " pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:09 crc kubenswrapper[4983]: I1125 20:52:09.142470 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdvtw" event={"ID":"644f6e59-3cda-499d-bb0f-f75730d24ebd","Type":"ContainerStarted","Data":"4efe00d53918e4203ac01095e97305bd2e2be9a54e367fbf66210f3d557ecf06"} Nov 25 20:52:09 crc kubenswrapper[4983]: I1125 20:52:09.148266 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/21634d0b-fbfc-409b-9ab9-9590fc78e410-catalog-content\") pod \"certified-operators-7khpl\" (UID: \"21634d0b-fbfc-409b-9ab9-9590fc78e410\") " pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:09 crc kubenswrapper[4983]: I1125 20:52:09.148376 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21634d0b-fbfc-409b-9ab9-9590fc78e410-utilities\") pod \"certified-operators-7khpl\" (UID: \"21634d0b-fbfc-409b-9ab9-9590fc78e410\") " pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:09 crc kubenswrapper[4983]: I1125 20:52:09.148530 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrgs9\" (UniqueName: \"kubernetes.io/projected/21634d0b-fbfc-409b-9ab9-9590fc78e410-kube-api-access-rrgs9\") pod \"certified-operators-7khpl\" (UID: \"21634d0b-fbfc-409b-9ab9-9590fc78e410\") " pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:09 crc kubenswrapper[4983]: I1125 20:52:09.149258 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21634d0b-fbfc-409b-9ab9-9590fc78e410-catalog-content\") pod \"certified-operators-7khpl\" (UID: \"21634d0b-fbfc-409b-9ab9-9590fc78e410\") " pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:09 crc kubenswrapper[4983]: I1125 20:52:09.149266 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21634d0b-fbfc-409b-9ab9-9590fc78e410-utilities\") pod \"certified-operators-7khpl\" (UID: \"21634d0b-fbfc-409b-9ab9-9590fc78e410\") " pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:09 crc kubenswrapper[4983]: I1125 20:52:09.168520 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrgs9\" (UniqueName: 
\"kubernetes.io/projected/21634d0b-fbfc-409b-9ab9-9590fc78e410-kube-api-access-rrgs9\") pod \"certified-operators-7khpl\" (UID: \"21634d0b-fbfc-409b-9ab9-9590fc78e410\") " pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:09 crc kubenswrapper[4983]: I1125 20:52:09.332954 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:09 crc kubenswrapper[4983]: I1125 20:52:09.936700 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7khpl"] Nov 25 20:52:09 crc kubenswrapper[4983]: W1125 20:52:09.939156 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21634d0b_fbfc_409b_9ab9_9590fc78e410.slice/crio-0a2e16e22913a6fce01d314fd9414d28ddc33e646846fa3fa44dbdd5b4c2db00 WatchSource:0}: Error finding container 0a2e16e22913a6fce01d314fd9414d28ddc33e646846fa3fa44dbdd5b4c2db00: Status 404 returned error can't find the container with id 0a2e16e22913a6fce01d314fd9414d28ddc33e646846fa3fa44dbdd5b4c2db00 Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.153182 4983 generic.go:334] "Generic (PLEG): container finished" podID="644f6e59-3cda-499d-bb0f-f75730d24ebd" containerID="4efe00d53918e4203ac01095e97305bd2e2be9a54e367fbf66210f3d557ecf06" exitCode=0 Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.153251 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdvtw" event={"ID":"644f6e59-3cda-499d-bb0f-f75730d24ebd","Type":"ContainerDied","Data":"4efe00d53918e4203ac01095e97305bd2e2be9a54e367fbf66210f3d557ecf06"} Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.155411 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7khpl" 
event={"ID":"21634d0b-fbfc-409b-9ab9-9590fc78e410","Type":"ContainerStarted","Data":"5f3ac5c8edb55dc49eaafbc84065dddae075d7d0d72f64e09e00ef6384047eb4"} Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.155450 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7khpl" event={"ID":"21634d0b-fbfc-409b-9ab9-9590fc78e410","Type":"ContainerStarted","Data":"0a2e16e22913a6fce01d314fd9414d28ddc33e646846fa3fa44dbdd5b4c2db00"} Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.778621 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rn5nn"] Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.781783 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.802379 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rn5nn"] Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.896410 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69679ac7-ad44-4ea1-a9f4-5d7257108543-utilities\") pod \"redhat-marketplace-rn5nn\" (UID: \"69679ac7-ad44-4ea1-a9f4-5d7257108543\") " pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.896731 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69679ac7-ad44-4ea1-a9f4-5d7257108543-catalog-content\") pod \"redhat-marketplace-rn5nn\" (UID: \"69679ac7-ad44-4ea1-a9f4-5d7257108543\") " pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.896760 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4w5ct\" (UniqueName: \"kubernetes.io/projected/69679ac7-ad44-4ea1-a9f4-5d7257108543-kube-api-access-4w5ct\") pod \"redhat-marketplace-rn5nn\" (UID: \"69679ac7-ad44-4ea1-a9f4-5d7257108543\") " pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.998615 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69679ac7-ad44-4ea1-a9f4-5d7257108543-catalog-content\") pod \"redhat-marketplace-rn5nn\" (UID: \"69679ac7-ad44-4ea1-a9f4-5d7257108543\") " pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.998682 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w5ct\" (UniqueName: \"kubernetes.io/projected/69679ac7-ad44-4ea1-a9f4-5d7257108543-kube-api-access-4w5ct\") pod \"redhat-marketplace-rn5nn\" (UID: \"69679ac7-ad44-4ea1-a9f4-5d7257108543\") " pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.998718 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69679ac7-ad44-4ea1-a9f4-5d7257108543-utilities\") pod \"redhat-marketplace-rn5nn\" (UID: \"69679ac7-ad44-4ea1-a9f4-5d7257108543\") " pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.999289 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69679ac7-ad44-4ea1-a9f4-5d7257108543-catalog-content\") pod \"redhat-marketplace-rn5nn\" (UID: \"69679ac7-ad44-4ea1-a9f4-5d7257108543\") " pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:10 crc kubenswrapper[4983]: I1125 20:52:10.999446 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/69679ac7-ad44-4ea1-a9f4-5d7257108543-utilities\") pod \"redhat-marketplace-rn5nn\" (UID: \"69679ac7-ad44-4ea1-a9f4-5d7257108543\") " pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.020690 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w5ct\" (UniqueName: \"kubernetes.io/projected/69679ac7-ad44-4ea1-a9f4-5d7257108543-kube-api-access-4w5ct\") pod \"redhat-marketplace-rn5nn\" (UID: \"69679ac7-ad44-4ea1-a9f4-5d7257108543\") " pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.097966 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.180303 4983 generic.go:334] "Generic (PLEG): container finished" podID="21634d0b-fbfc-409b-9ab9-9590fc78e410" containerID="5f3ac5c8edb55dc49eaafbc84065dddae075d7d0d72f64e09e00ef6384047eb4" exitCode=0 Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.180472 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7khpl" event={"ID":"21634d0b-fbfc-409b-9ab9-9590fc78e410","Type":"ContainerDied","Data":"5f3ac5c8edb55dc49eaafbc84065dddae075d7d0d72f64e09e00ef6384047eb4"} Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.200119 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdvtw" event={"ID":"644f6e59-3cda-499d-bb0f-f75730d24ebd","Type":"ContainerStarted","Data":"8f644396cbf74697a2245eedbf6377e4780aa9c13ec643428ecb3acd8e8c7079"} Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.267071 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pdvtw" podStartSLOduration=2.7878608099999997 podStartE2EDuration="5.267042924s" podCreationTimestamp="2025-11-25 
20:52:06 +0000 UTC" firstStartedPulling="2025-11-25 20:52:08.131809997 +0000 UTC m=+1509.244343389" lastFinishedPulling="2025-11-25 20:52:10.610992111 +0000 UTC m=+1511.723525503" observedRunningTime="2025-11-25 20:52:11.247896618 +0000 UTC m=+1512.360430010" watchObservedRunningTime="2025-11-25 20:52:11.267042924 +0000 UTC m=+1512.379576316" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.677332 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rn5nn"] Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.792435 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-572gk"] Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.798938 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.818513 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-572gk"] Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.833011 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5cz5\" (UniqueName: \"kubernetes.io/projected/113cf747-b33d-4ed6-8846-055134ba5779-kube-api-access-q5cz5\") pod \"redhat-operators-572gk\" (UID: \"113cf747-b33d-4ed6-8846-055134ba5779\") " pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.833154 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/113cf747-b33d-4ed6-8846-055134ba5779-utilities\") pod \"redhat-operators-572gk\" (UID: \"113cf747-b33d-4ed6-8846-055134ba5779\") " pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.833233 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/113cf747-b33d-4ed6-8846-055134ba5779-catalog-content\") pod \"redhat-operators-572gk\" (UID: \"113cf747-b33d-4ed6-8846-055134ba5779\") " pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.935806 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5cz5\" (UniqueName: \"kubernetes.io/projected/113cf747-b33d-4ed6-8846-055134ba5779-kube-api-access-q5cz5\") pod \"redhat-operators-572gk\" (UID: \"113cf747-b33d-4ed6-8846-055134ba5779\") " pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.935897 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/113cf747-b33d-4ed6-8846-055134ba5779-utilities\") pod \"redhat-operators-572gk\" (UID: \"113cf747-b33d-4ed6-8846-055134ba5779\") " pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.935973 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/113cf747-b33d-4ed6-8846-055134ba5779-catalog-content\") pod \"redhat-operators-572gk\" (UID: \"113cf747-b33d-4ed6-8846-055134ba5779\") " pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.936522 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/113cf747-b33d-4ed6-8846-055134ba5779-catalog-content\") pod \"redhat-operators-572gk\" (UID: \"113cf747-b33d-4ed6-8846-055134ba5779\") " pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.936651 4983 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/113cf747-b33d-4ed6-8846-055134ba5779-utilities\") pod \"redhat-operators-572gk\" (UID: \"113cf747-b33d-4ed6-8846-055134ba5779\") " pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:11 crc kubenswrapper[4983]: I1125 20:52:11.966839 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5cz5\" (UniqueName: \"kubernetes.io/projected/113cf747-b33d-4ed6-8846-055134ba5779-kube-api-access-q5cz5\") pod \"redhat-operators-572gk\" (UID: \"113cf747-b33d-4ed6-8846-055134ba5779\") " pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:12 crc kubenswrapper[4983]: I1125 20:52:12.198262 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:12 crc kubenswrapper[4983]: I1125 20:52:12.249823 4983 generic.go:334] "Generic (PLEG): container finished" podID="69679ac7-ad44-4ea1-a9f4-5d7257108543" containerID="21d01b8ccdd1522c9fab7474d7db8ba16c244e6278f99aaaf4dca5c2a1747f58" exitCode=0 Nov 25 20:52:12 crc kubenswrapper[4983]: I1125 20:52:12.251733 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rn5nn" event={"ID":"69679ac7-ad44-4ea1-a9f4-5d7257108543","Type":"ContainerDied","Data":"21d01b8ccdd1522c9fab7474d7db8ba16c244e6278f99aaaf4dca5c2a1747f58"} Nov 25 20:52:12 crc kubenswrapper[4983]: I1125 20:52:12.251771 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rn5nn" event={"ID":"69679ac7-ad44-4ea1-a9f4-5d7257108543","Type":"ContainerStarted","Data":"42ae418b653d01112345dfb5ccb43f329bcb493a1c0d7895c477c0e124ca5bd2"} Nov 25 20:52:12 crc kubenswrapper[4983]: W1125 20:52:12.815989 4983 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod113cf747_b33d_4ed6_8846_055134ba5779.slice/crio-64fc3056660ff5eb2de5093ed672a3794b0f597589cb0b1827bda976ee812b89 WatchSource:0}: Error finding container 64fc3056660ff5eb2de5093ed672a3794b0f597589cb0b1827bda976ee812b89: Status 404 returned error can't find the container with id 64fc3056660ff5eb2de5093ed672a3794b0f597589cb0b1827bda976ee812b89 Nov 25 20:52:12 crc kubenswrapper[4983]: I1125 20:52:12.819987 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-572gk"] Nov 25 20:52:13 crc kubenswrapper[4983]: I1125 20:52:13.266299 4983 generic.go:334] "Generic (PLEG): container finished" podID="113cf747-b33d-4ed6-8846-055134ba5779" containerID="edf2a671b6a2493cbc3c89f8f6cd2a44e13cb04fd80fa226b93bec6dbd108ee3" exitCode=0 Nov 25 20:52:13 crc kubenswrapper[4983]: I1125 20:52:13.266485 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-572gk" event={"ID":"113cf747-b33d-4ed6-8846-055134ba5779","Type":"ContainerDied","Data":"edf2a671b6a2493cbc3c89f8f6cd2a44e13cb04fd80fa226b93bec6dbd108ee3"} Nov 25 20:52:13 crc kubenswrapper[4983]: I1125 20:52:13.266766 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-572gk" event={"ID":"113cf747-b33d-4ed6-8846-055134ba5779","Type":"ContainerStarted","Data":"64fc3056660ff5eb2de5093ed672a3794b0f597589cb0b1827bda976ee812b89"} Nov 25 20:52:14 crc kubenswrapper[4983]: I1125 20:52:14.282275 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-572gk" event={"ID":"113cf747-b33d-4ed6-8846-055134ba5779","Type":"ContainerStarted","Data":"0d467f560a6e352f783d83205483e57db5d34574f901d1dff3753cc3a878e30b"} Nov 25 20:52:14 crc kubenswrapper[4983]: I1125 20:52:14.285789 4983 generic.go:334] "Generic (PLEG): container finished" podID="69679ac7-ad44-4ea1-a9f4-5d7257108543" 
containerID="c191054db26ea9b7c1d1878c864b4188c5c597bae8c91ca68476869c39dc4164" exitCode=0 Nov 25 20:52:14 crc kubenswrapper[4983]: I1125 20:52:14.285848 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rn5nn" event={"ID":"69679ac7-ad44-4ea1-a9f4-5d7257108543","Type":"ContainerDied","Data":"c191054db26ea9b7c1d1878c864b4188c5c597bae8c91ca68476869c39dc4164"} Nov 25 20:52:15 crc kubenswrapper[4983]: I1125 20:52:15.320283 4983 generic.go:334] "Generic (PLEG): container finished" podID="113cf747-b33d-4ed6-8846-055134ba5779" containerID="0d467f560a6e352f783d83205483e57db5d34574f901d1dff3753cc3a878e30b" exitCode=0 Nov 25 20:52:15 crc kubenswrapper[4983]: I1125 20:52:15.320357 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-572gk" event={"ID":"113cf747-b33d-4ed6-8846-055134ba5779","Type":"ContainerDied","Data":"0d467f560a6e352f783d83205483e57db5d34574f901d1dff3753cc3a878e30b"} Nov 25 20:52:15 crc kubenswrapper[4983]: I1125 20:52:15.339440 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rn5nn" event={"ID":"69679ac7-ad44-4ea1-a9f4-5d7257108543","Type":"ContainerStarted","Data":"f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd"} Nov 25 20:52:15 crc kubenswrapper[4983]: I1125 20:52:15.376347 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rn5nn" podStartSLOduration=2.911749284 podStartE2EDuration="5.376328521s" podCreationTimestamp="2025-11-25 20:52:10 +0000 UTC" firstStartedPulling="2025-11-25 20:52:12.263641611 +0000 UTC m=+1513.376174993" lastFinishedPulling="2025-11-25 20:52:14.728220838 +0000 UTC m=+1515.840754230" observedRunningTime="2025-11-25 20:52:15.372153501 +0000 UTC m=+1516.484686893" watchObservedRunningTime="2025-11-25 20:52:15.376328521 +0000 UTC m=+1516.488861913" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 
20:52:16.023504 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q"] Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.025445 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.031868 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.032080 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.032191 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.032273 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.045657 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q"] Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.055994 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q\" (UID: \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.056053 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4hq5\" (UniqueName: 
\"kubernetes.io/projected/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-kube-api-access-v4hq5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q\" (UID: \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.056125 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q\" (UID: \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.158830 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q\" (UID: \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.158890 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4hq5\" (UniqueName: \"kubernetes.io/projected/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-kube-api-access-v4hq5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q\" (UID: \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.159488 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q\" (UID: \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.186605 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q\" (UID: \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.189251 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4hq5\" (UniqueName: \"kubernetes.io/projected/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-kube-api-access-v4hq5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q\" (UID: \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.202337 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q\" (UID: \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.343013 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.360071 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-572gk" event={"ID":"113cf747-b33d-4ed6-8846-055134ba5779","Type":"ContainerStarted","Data":"ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8"} Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.385894 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-572gk" podStartSLOduration=2.910880336 podStartE2EDuration="5.38587619s" podCreationTimestamp="2025-11-25 20:52:11 +0000 UTC" firstStartedPulling="2025-11-25 20:52:13.269279556 +0000 UTC m=+1514.381812948" lastFinishedPulling="2025-11-25 20:52:15.74427541 +0000 UTC m=+1516.856808802" observedRunningTime="2025-11-25 20:52:16.382954953 +0000 UTC m=+1517.495488345" watchObservedRunningTime="2025-11-25 20:52:16.38587619 +0000 UTC m=+1517.498409582" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.969046 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:16 crc kubenswrapper[4983]: I1125 20:52:16.969409 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:17 crc kubenswrapper[4983]: I1125 20:52:17.040135 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q"] Nov 25 20:52:17 crc kubenswrapper[4983]: I1125 20:52:17.372328 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" event={"ID":"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32","Type":"ContainerStarted","Data":"210f55ede6649011a0ef4509f80ec82c08339301f3ec2725b4adce4e07cc14c5"} Nov 25 20:52:18 crc kubenswrapper[4983]: 
I1125 20:52:18.041739 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pdvtw" podUID="644f6e59-3cda-499d-bb0f-f75730d24ebd" containerName="registry-server" probeResult="failure" output=< Nov 25 20:52:18 crc kubenswrapper[4983]: timeout: failed to connect service ":50051" within 1s Nov 25 20:52:18 crc kubenswrapper[4983]: > Nov 25 20:52:18 crc kubenswrapper[4983]: I1125 20:52:18.395091 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" event={"ID":"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32","Type":"ContainerStarted","Data":"4c2d83f03b739a8bc371534c12e59e05b6dbc48d268c6b5dc10c6b0c5ac0201a"} Nov 25 20:52:18 crc kubenswrapper[4983]: I1125 20:52:18.448511 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" podStartSLOduration=2.91953125 podStartE2EDuration="3.448484269s" podCreationTimestamp="2025-11-25 20:52:15 +0000 UTC" firstStartedPulling="2025-11-25 20:52:17.060056413 +0000 UTC m=+1518.172589805" lastFinishedPulling="2025-11-25 20:52:17.589009432 +0000 UTC m=+1518.701542824" observedRunningTime="2025-11-25 20:52:18.429655161 +0000 UTC m=+1519.542188553" watchObservedRunningTime="2025-11-25 20:52:18.448484269 +0000 UTC m=+1519.561017661" Nov 25 20:52:19 crc kubenswrapper[4983]: I1125 20:52:19.574197 4983 scope.go:117] "RemoveContainer" containerID="e5a383900a489bd447ca33be85c057bbf0a472365e2dd94d8a88f84d1f177e7d" Nov 25 20:52:21 crc kubenswrapper[4983]: I1125 20:52:21.098249 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:21 crc kubenswrapper[4983]: I1125 20:52:21.098802 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:21 crc kubenswrapper[4983]: I1125 20:52:21.161021 4983 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:21 crc kubenswrapper[4983]: I1125 20:52:21.497667 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:22 crc kubenswrapper[4983]: I1125 20:52:22.130084 4983 scope.go:117] "RemoveContainer" containerID="3d511550f6f50f21762d0a13d5eab8144b2a675eeaa76f06b8b92c3cecf1f47d" Nov 25 20:52:22 crc kubenswrapper[4983]: I1125 20:52:22.199401 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:22 crc kubenswrapper[4983]: I1125 20:52:22.199967 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:22 crc kubenswrapper[4983]: I1125 20:52:22.214699 4983 scope.go:117] "RemoveContainer" containerID="6f003e148d3b89a68be1e63708eb080536ae2caf1c25d82591c53535eb754912" Nov 25 20:52:23 crc kubenswrapper[4983]: I1125 20:52:23.288306 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-572gk" podUID="113cf747-b33d-4ed6-8846-055134ba5779" containerName="registry-server" probeResult="failure" output=< Nov 25 20:52:23 crc kubenswrapper[4983]: timeout: failed to connect service ":50051" within 1s Nov 25 20:52:23 crc kubenswrapper[4983]: > Nov 25 20:52:23 crc kubenswrapper[4983]: I1125 20:52:23.450235 4983 generic.go:334] "Generic (PLEG): container finished" podID="21634d0b-fbfc-409b-9ab9-9590fc78e410" containerID="32e984c23b9dd6be35eafc2af5743d2c8c0c7776b3db6a92b640af99287d6965" exitCode=0 Nov 25 20:52:23 crc kubenswrapper[4983]: I1125 20:52:23.450450 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7khpl" 
event={"ID":"21634d0b-fbfc-409b-9ab9-9590fc78e410","Type":"ContainerDied","Data":"32e984c23b9dd6be35eafc2af5743d2c8c0c7776b3db6a92b640af99287d6965"} Nov 25 20:52:24 crc kubenswrapper[4983]: I1125 20:52:24.466934 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7khpl" event={"ID":"21634d0b-fbfc-409b-9ab9-9590fc78e410","Type":"ContainerStarted","Data":"1c559b96515a2930a29d61f5623a9f5a33b1f1473dfaf4fb70ed09b4e09b7cb0"} Nov 25 20:52:24 crc kubenswrapper[4983]: I1125 20:52:24.505929 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7khpl" podStartSLOduration=3.807053476 podStartE2EDuration="16.505898786s" podCreationTimestamp="2025-11-25 20:52:08 +0000 UTC" firstStartedPulling="2025-11-25 20:52:11.185995939 +0000 UTC m=+1512.298529331" lastFinishedPulling="2025-11-25 20:52:23.884841229 +0000 UTC m=+1524.997374641" observedRunningTime="2025-11-25 20:52:24.492431449 +0000 UTC m=+1525.604964841" watchObservedRunningTime="2025-11-25 20:52:24.505898786 +0000 UTC m=+1525.618432178" Nov 25 20:52:27 crc kubenswrapper[4983]: I1125 20:52:27.048979 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:27 crc kubenswrapper[4983]: I1125 20:52:27.115490 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:29 crc kubenswrapper[4983]: I1125 20:52:29.334396 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:29 crc kubenswrapper[4983]: I1125 20:52:29.334820 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:29 crc kubenswrapper[4983]: I1125 20:52:29.422611 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:29 crc kubenswrapper[4983]: I1125 20:52:29.568419 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7khpl" Nov 25 20:52:32 crc kubenswrapper[4983]: I1125 20:52:32.335321 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:32 crc kubenswrapper[4983]: I1125 20:52:32.424013 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:35 crc kubenswrapper[4983]: I1125 20:52:35.208875 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6dcc87d69d-p8fwj" Nov 25 20:52:37 crc kubenswrapper[4983]: I1125 20:52:37.104797 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7khpl"] Nov 25 20:52:37 crc kubenswrapper[4983]: I1125 20:52:37.444058 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5wpf"] Nov 25 20:52:37 crc kubenswrapper[4983]: I1125 20:52:37.444393 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x5wpf" podUID="fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" containerName="registry-server" containerID="cri-o://206f56354a19d84ebcf30f171dc4bd6b47855f01cfe07c36a1943c5ca9d5539d" gracePeriod=2 Nov 25 20:52:37 crc kubenswrapper[4983]: I1125 20:52:37.663696 4983 generic.go:334] "Generic (PLEG): container finished" podID="fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" containerID="206f56354a19d84ebcf30f171dc4bd6b47855f01cfe07c36a1943c5ca9d5539d" exitCode=0 Nov 25 20:52:37 crc kubenswrapper[4983]: I1125 20:52:37.663776 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpf" 
event={"ID":"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2","Type":"ContainerDied","Data":"206f56354a19d84ebcf30f171dc4bd6b47855f01cfe07c36a1943c5ca9d5539d"} Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.014358 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.122793 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-utilities\") pod \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\" (UID: \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\") " Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.123186 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-catalog-content\") pod \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\" (UID: \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\") " Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.123335 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wsk5\" (UniqueName: \"kubernetes.io/projected/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-kube-api-access-5wsk5\") pod \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\" (UID: \"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2\") " Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.123511 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-utilities" (OuterVolumeSpecName: "utilities") pod "fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" (UID: "fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.123827 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.154787 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-kube-api-access-5wsk5" (OuterVolumeSpecName: "kube-api-access-5wsk5") pod "fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" (UID: "fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2"). InnerVolumeSpecName "kube-api-access-5wsk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.188479 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" (UID: "fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.226055 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wsk5\" (UniqueName: \"kubernetes.io/projected/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-kube-api-access-5wsk5\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.226101 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.691448 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x5wpf" event={"ID":"fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2","Type":"ContainerDied","Data":"82bab1276357d3905b64468feaa92a4b1a421a6c265c85c7b997fa0ca9fdb577"} Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.691515 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x5wpf" Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.691549 4983 scope.go:117] "RemoveContainer" containerID="206f56354a19d84ebcf30f171dc4bd6b47855f01cfe07c36a1943c5ca9d5539d" Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.738317 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x5wpf"] Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.738740 4983 scope.go:117] "RemoveContainer" containerID="11c5b08c2145e30606b803006d6683ac773d40779612143c9c4b6995b4c4b3ae" Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.759911 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x5wpf"] Nov 25 20:52:38 crc kubenswrapper[4983]: I1125 20:52:38.783381 4983 scope.go:117] "RemoveContainer" containerID="aa334b484199098ed473d005468e1f3cc78c91a2c59727b82134b43e065bbbbd" Nov 25 20:52:39 crc kubenswrapper[4983]: I1125 20:52:39.617424 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" path="/var/lib/kubelet/pods/fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2/volumes" Nov 25 20:52:39 crc kubenswrapper[4983]: I1125 20:52:39.641415 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rn5nn"] Nov 25 20:52:39 crc kubenswrapper[4983]: I1125 20:52:39.641819 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rn5nn" podUID="69679ac7-ad44-4ea1-a9f4-5d7257108543" containerName="registry-server" containerID="cri-o://f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd" gracePeriod=2 Nov 25 20:52:39 crc kubenswrapper[4983]: I1125 20:52:39.840813 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-572gk"] Nov 25 20:52:39 crc kubenswrapper[4983]: I1125 20:52:39.841531 4983 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-572gk" podUID="113cf747-b33d-4ed6-8846-055134ba5779" containerName="registry-server" containerID="cri-o://ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8" gracePeriod=2 Nov 25 20:52:39 crc kubenswrapper[4983]: E1125 20:52:39.874861 4983 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69679ac7_ad44_4ea1_a9f4_5d7257108543.slice/crio-conmon-f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd.scope\": RecentStats: unable to find data in memory cache]" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.242880 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pdvtw"] Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.243163 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pdvtw" podUID="644f6e59-3cda-499d-bb0f-f75730d24ebd" containerName="registry-server" containerID="cri-o://8f644396cbf74697a2245eedbf6377e4780aa9c13ec643428ecb3acd8e8c7079" gracePeriod=2 Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.248314 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.375596 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w5ct\" (UniqueName: \"kubernetes.io/projected/69679ac7-ad44-4ea1-a9f4-5d7257108543-kube-api-access-4w5ct\") pod \"69679ac7-ad44-4ea1-a9f4-5d7257108543\" (UID: \"69679ac7-ad44-4ea1-a9f4-5d7257108543\") " Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.376068 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69679ac7-ad44-4ea1-a9f4-5d7257108543-utilities\") pod \"69679ac7-ad44-4ea1-a9f4-5d7257108543\" (UID: \"69679ac7-ad44-4ea1-a9f4-5d7257108543\") " Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.376188 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69679ac7-ad44-4ea1-a9f4-5d7257108543-catalog-content\") pod \"69679ac7-ad44-4ea1-a9f4-5d7257108543\" (UID: \"69679ac7-ad44-4ea1-a9f4-5d7257108543\") " Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.376848 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69679ac7-ad44-4ea1-a9f4-5d7257108543-utilities" (OuterVolumeSpecName: "utilities") pod "69679ac7-ad44-4ea1-a9f4-5d7257108543" (UID: "69679ac7-ad44-4ea1-a9f4-5d7257108543"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.385622 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69679ac7-ad44-4ea1-a9f4-5d7257108543-kube-api-access-4w5ct" (OuterVolumeSpecName: "kube-api-access-4w5ct") pod "69679ac7-ad44-4ea1-a9f4-5d7257108543" (UID: "69679ac7-ad44-4ea1-a9f4-5d7257108543"). InnerVolumeSpecName "kube-api-access-4w5ct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.395707 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69679ac7-ad44-4ea1-a9f4-5d7257108543-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69679ac7-ad44-4ea1-a9f4-5d7257108543" (UID: "69679ac7-ad44-4ea1-a9f4-5d7257108543"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.415244 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.479811 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69679ac7-ad44-4ea1-a9f4-5d7257108543-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.479850 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w5ct\" (UniqueName: \"kubernetes.io/projected/69679ac7-ad44-4ea1-a9f4-5d7257108543-kube-api-access-4w5ct\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.479882 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69679ac7-ad44-4ea1-a9f4-5d7257108543-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.581162 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/113cf747-b33d-4ed6-8846-055134ba5779-catalog-content\") pod \"113cf747-b33d-4ed6-8846-055134ba5779\" (UID: \"113cf747-b33d-4ed6-8846-055134ba5779\") " Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.581317 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/113cf747-b33d-4ed6-8846-055134ba5779-utilities\") pod \"113cf747-b33d-4ed6-8846-055134ba5779\" (UID: \"113cf747-b33d-4ed6-8846-055134ba5779\") " Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.581525 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5cz5\" (UniqueName: \"kubernetes.io/projected/113cf747-b33d-4ed6-8846-055134ba5779-kube-api-access-q5cz5\") pod \"113cf747-b33d-4ed6-8846-055134ba5779\" (UID: \"113cf747-b33d-4ed6-8846-055134ba5779\") " Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.583328 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/113cf747-b33d-4ed6-8846-055134ba5779-utilities" (OuterVolumeSpecName: "utilities") pod "113cf747-b33d-4ed6-8846-055134ba5779" (UID: "113cf747-b33d-4ed6-8846-055134ba5779"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.587824 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113cf747-b33d-4ed6-8846-055134ba5779-kube-api-access-q5cz5" (OuterVolumeSpecName: "kube-api-access-q5cz5") pod "113cf747-b33d-4ed6-8846-055134ba5779" (UID: "113cf747-b33d-4ed6-8846-055134ba5779"). InnerVolumeSpecName "kube-api-access-q5cz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.668514 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/113cf747-b33d-4ed6-8846-055134ba5779-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "113cf747-b33d-4ed6-8846-055134ba5779" (UID: "113cf747-b33d-4ed6-8846-055134ba5779"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.684087 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/113cf747-b33d-4ed6-8846-055134ba5779-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.684125 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/113cf747-b33d-4ed6-8846-055134ba5779-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.684136 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5cz5\" (UniqueName: \"kubernetes.io/projected/113cf747-b33d-4ed6-8846-055134ba5779-kube-api-access-q5cz5\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.725138 4983 generic.go:334] "Generic (PLEG): container finished" podID="644f6e59-3cda-499d-bb0f-f75730d24ebd" containerID="8f644396cbf74697a2245eedbf6377e4780aa9c13ec643428ecb3acd8e8c7079" exitCode=0 Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.725662 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdvtw" event={"ID":"644f6e59-3cda-499d-bb0f-f75730d24ebd","Type":"ContainerDied","Data":"8f644396cbf74697a2245eedbf6377e4780aa9c13ec643428ecb3acd8e8c7079"} Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.738315 4983 generic.go:334] "Generic (PLEG): container finished" podID="113cf747-b33d-4ed6-8846-055134ba5779" containerID="ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8" exitCode=0 Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.738422 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-572gk" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.738444 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-572gk" event={"ID":"113cf747-b33d-4ed6-8846-055134ba5779","Type":"ContainerDied","Data":"ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8"} Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.738529 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-572gk" event={"ID":"113cf747-b33d-4ed6-8846-055134ba5779","Type":"ContainerDied","Data":"64fc3056660ff5eb2de5093ed672a3794b0f597589cb0b1827bda976ee812b89"} Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.738580 4983 scope.go:117] "RemoveContainer" containerID="ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.746200 4983 generic.go:334] "Generic (PLEG): container finished" podID="69679ac7-ad44-4ea1-a9f4-5d7257108543" containerID="f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd" exitCode=0 Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.746271 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rn5nn" event={"ID":"69679ac7-ad44-4ea1-a9f4-5d7257108543","Type":"ContainerDied","Data":"f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd"} Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.746315 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rn5nn" event={"ID":"69679ac7-ad44-4ea1-a9f4-5d7257108543","Type":"ContainerDied","Data":"42ae418b653d01112345dfb5ccb43f329bcb493a1c0d7895c477c0e124ca5bd2"} Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.746435 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rn5nn" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.755104 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.783872 4983 scope.go:117] "RemoveContainer" containerID="0d467f560a6e352f783d83205483e57db5d34574f901d1dff3753cc3a878e30b" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.846206 4983 scope.go:117] "RemoveContainer" containerID="edf2a671b6a2493cbc3c89f8f6cd2a44e13cb04fd80fa226b93bec6dbd108ee3" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.900931 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644f6e59-3cda-499d-bb0f-f75730d24ebd-catalog-content\") pod \"644f6e59-3cda-499d-bb0f-f75730d24ebd\" (UID: \"644f6e59-3cda-499d-bb0f-f75730d24ebd\") " Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.901023 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644f6e59-3cda-499d-bb0f-f75730d24ebd-utilities\") pod \"644f6e59-3cda-499d-bb0f-f75730d24ebd\" (UID: \"644f6e59-3cda-499d-bb0f-f75730d24ebd\") " Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.901174 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm7sr\" (UniqueName: \"kubernetes.io/projected/644f6e59-3cda-499d-bb0f-f75730d24ebd-kube-api-access-pm7sr\") pod \"644f6e59-3cda-499d-bb0f-f75730d24ebd\" (UID: \"644f6e59-3cda-499d-bb0f-f75730d24ebd\") " Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.901665 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/644f6e59-3cda-499d-bb0f-f75730d24ebd-utilities" (OuterVolumeSpecName: "utilities") pod "644f6e59-3cda-499d-bb0f-f75730d24ebd" (UID: 
"644f6e59-3cda-499d-bb0f-f75730d24ebd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.907313 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/644f6e59-3cda-499d-bb0f-f75730d24ebd-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.911301 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/644f6e59-3cda-499d-bb0f-f75730d24ebd-kube-api-access-pm7sr" (OuterVolumeSpecName: "kube-api-access-pm7sr") pod "644f6e59-3cda-499d-bb0f-f75730d24ebd" (UID: "644f6e59-3cda-499d-bb0f-f75730d24ebd"). InnerVolumeSpecName "kube-api-access-pm7sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.913835 4983 scope.go:117] "RemoveContainer" containerID="ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8" Nov 25 20:52:40 crc kubenswrapper[4983]: E1125 20:52:40.916235 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8\": container with ID starting with ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8 not found: ID does not exist" containerID="ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.916304 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8"} err="failed to get container status \"ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8\": rpc error: code = NotFound desc = could not find container \"ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8\": container with ID starting with 
ae9245b1dfa687aeb15a9894a8b1a8be3eb8a9bbc597f4511e0c98b379d456d8 not found: ID does not exist" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.916331 4983 scope.go:117] "RemoveContainer" containerID="0d467f560a6e352f783d83205483e57db5d34574f901d1dff3753cc3a878e30b" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.917262 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-572gk"] Nov 25 20:52:40 crc kubenswrapper[4983]: E1125 20:52:40.921310 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d467f560a6e352f783d83205483e57db5d34574f901d1dff3753cc3a878e30b\": container with ID starting with 0d467f560a6e352f783d83205483e57db5d34574f901d1dff3753cc3a878e30b not found: ID does not exist" containerID="0d467f560a6e352f783d83205483e57db5d34574f901d1dff3753cc3a878e30b" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.921343 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d467f560a6e352f783d83205483e57db5d34574f901d1dff3753cc3a878e30b"} err="failed to get container status \"0d467f560a6e352f783d83205483e57db5d34574f901d1dff3753cc3a878e30b\": rpc error: code = NotFound desc = could not find container \"0d467f560a6e352f783d83205483e57db5d34574f901d1dff3753cc3a878e30b\": container with ID starting with 0d467f560a6e352f783d83205483e57db5d34574f901d1dff3753cc3a878e30b not found: ID does not exist" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.921381 4983 scope.go:117] "RemoveContainer" containerID="edf2a671b6a2493cbc3c89f8f6cd2a44e13cb04fd80fa226b93bec6dbd108ee3" Nov 25 20:52:40 crc kubenswrapper[4983]: E1125 20:52:40.922185 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edf2a671b6a2493cbc3c89f8f6cd2a44e13cb04fd80fa226b93bec6dbd108ee3\": container with ID starting with 
edf2a671b6a2493cbc3c89f8f6cd2a44e13cb04fd80fa226b93bec6dbd108ee3 not found: ID does not exist" containerID="edf2a671b6a2493cbc3c89f8f6cd2a44e13cb04fd80fa226b93bec6dbd108ee3" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.922242 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edf2a671b6a2493cbc3c89f8f6cd2a44e13cb04fd80fa226b93bec6dbd108ee3"} err="failed to get container status \"edf2a671b6a2493cbc3c89f8f6cd2a44e13cb04fd80fa226b93bec6dbd108ee3\": rpc error: code = NotFound desc = could not find container \"edf2a671b6a2493cbc3c89f8f6cd2a44e13cb04fd80fa226b93bec6dbd108ee3\": container with ID starting with edf2a671b6a2493cbc3c89f8f6cd2a44e13cb04fd80fa226b93bec6dbd108ee3 not found: ID does not exist" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.922281 4983 scope.go:117] "RemoveContainer" containerID="f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd" Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.932084 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-572gk"] Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.941454 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rn5nn"] Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.950039 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rn5nn"] Nov 25 20:52:40 crc kubenswrapper[4983]: I1125 20:52:40.990845 4983 scope.go:117] "RemoveContainer" containerID="c191054db26ea9b7c1d1878c864b4188c5c597bae8c91ca68476869c39dc4164" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.010445 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm7sr\" (UniqueName: \"kubernetes.io/projected/644f6e59-3cda-499d-bb0f-f75730d24ebd-kube-api-access-pm7sr\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.010988 4983 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/644f6e59-3cda-499d-bb0f-f75730d24ebd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "644f6e59-3cda-499d-bb0f-f75730d24ebd" (UID: "644f6e59-3cda-499d-bb0f-f75730d24ebd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.033160 4983 scope.go:117] "RemoveContainer" containerID="21d01b8ccdd1522c9fab7474d7db8ba16c244e6278f99aaaf4dca5c2a1747f58" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.065208 4983 scope.go:117] "RemoveContainer" containerID="f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd" Nov 25 20:52:41 crc kubenswrapper[4983]: E1125 20:52:41.065675 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd\": container with ID starting with f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd not found: ID does not exist" containerID="f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.065815 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd"} err="failed to get container status \"f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd\": rpc error: code = NotFound desc = could not find container \"f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd\": container with ID starting with f0ab43f6910062c022bfe34efd53954e9aee04766d8f2b5bdccdb724b1082fbd not found: ID does not exist" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.065854 4983 scope.go:117] "RemoveContainer" containerID="c191054db26ea9b7c1d1878c864b4188c5c597bae8c91ca68476869c39dc4164" Nov 25 20:52:41 crc 
kubenswrapper[4983]: E1125 20:52:41.066043 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c191054db26ea9b7c1d1878c864b4188c5c597bae8c91ca68476869c39dc4164\": container with ID starting with c191054db26ea9b7c1d1878c864b4188c5c597bae8c91ca68476869c39dc4164 not found: ID does not exist" containerID="c191054db26ea9b7c1d1878c864b4188c5c597bae8c91ca68476869c39dc4164" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.066074 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c191054db26ea9b7c1d1878c864b4188c5c597bae8c91ca68476869c39dc4164"} err="failed to get container status \"c191054db26ea9b7c1d1878c864b4188c5c597bae8c91ca68476869c39dc4164\": rpc error: code = NotFound desc = could not find container \"c191054db26ea9b7c1d1878c864b4188c5c597bae8c91ca68476869c39dc4164\": container with ID starting with c191054db26ea9b7c1d1878c864b4188c5c597bae8c91ca68476869c39dc4164 not found: ID does not exist" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.066091 4983 scope.go:117] "RemoveContainer" containerID="21d01b8ccdd1522c9fab7474d7db8ba16c244e6278f99aaaf4dca5c2a1747f58" Nov 25 20:52:41 crc kubenswrapper[4983]: E1125 20:52:41.066483 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d01b8ccdd1522c9fab7474d7db8ba16c244e6278f99aaaf4dca5c2a1747f58\": container with ID starting with 21d01b8ccdd1522c9fab7474d7db8ba16c244e6278f99aaaf4dca5c2a1747f58 not found: ID does not exist" containerID="21d01b8ccdd1522c9fab7474d7db8ba16c244e6278f99aaaf4dca5c2a1747f58" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.066530 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d01b8ccdd1522c9fab7474d7db8ba16c244e6278f99aaaf4dca5c2a1747f58"} err="failed to get container status 
\"21d01b8ccdd1522c9fab7474d7db8ba16c244e6278f99aaaf4dca5c2a1747f58\": rpc error: code = NotFound desc = could not find container \"21d01b8ccdd1522c9fab7474d7db8ba16c244e6278f99aaaf4dca5c2a1747f58\": container with ID starting with 21d01b8ccdd1522c9fab7474d7db8ba16c244e6278f99aaaf4dca5c2a1747f58 not found: ID does not exist" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.114116 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/644f6e59-3cda-499d-bb0f-f75730d24ebd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.617778 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="113cf747-b33d-4ed6-8846-055134ba5779" path="/var/lib/kubelet/pods/113cf747-b33d-4ed6-8846-055134ba5779/volumes" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.619257 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69679ac7-ad44-4ea1-a9f4-5d7257108543" path="/var/lib/kubelet/pods/69679ac7-ad44-4ea1-a9f4-5d7257108543/volumes" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.760401 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pdvtw" event={"ID":"644f6e59-3cda-499d-bb0f-f75730d24ebd","Type":"ContainerDied","Data":"703af15a6a4a65d725c8cb968d2ad107c036d51e51a440af2ca9a5641d414386"} Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.760462 4983 scope.go:117] "RemoveContainer" containerID="8f644396cbf74697a2245eedbf6377e4780aa9c13ec643428ecb3acd8e8c7079" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.760590 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pdvtw" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.836420 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pdvtw"] Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.845642 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pdvtw"] Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.849200 4983 scope.go:117] "RemoveContainer" containerID="4efe00d53918e4203ac01095e97305bd2e2be9a54e367fbf66210f3d557ecf06" Nov 25 20:52:41 crc kubenswrapper[4983]: I1125 20:52:41.888139 4983 scope.go:117] "RemoveContainer" containerID="acfba67c234446e29cbac2c903922d5ca9e3f7b4d2475dcea369affbe2a24e1b" Nov 25 20:52:43 crc kubenswrapper[4983]: I1125 20:52:43.625145 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="644f6e59-3cda-499d-bb0f-f75730d24ebd" path="/var/lib/kubelet/pods/644f6e59-3cda-499d-bb0f-f75730d24ebd/volumes" Nov 25 20:52:53 crc kubenswrapper[4983]: I1125 20:52:53.072809 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-89aa-account-create-update-gpfwm"] Nov 25 20:52:53 crc kubenswrapper[4983]: I1125 20:52:53.084960 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-89aa-account-create-update-gpfwm"] Nov 25 20:52:53 crc kubenswrapper[4983]: I1125 20:52:53.628729 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c3cf6a7-c209-47b7-81a7-95e076a0e4ed" path="/var/lib/kubelet/pods/2c3cf6a7-c209-47b7-81a7-95e076a0e4ed/volumes" Nov 25 20:52:54 crc kubenswrapper[4983]: I1125 20:52:54.058964 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-fz9jq"] Nov 25 20:52:54 crc kubenswrapper[4983]: I1125 20:52:54.079881 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-s9cqc"] Nov 25 20:52:54 crc kubenswrapper[4983]: I1125 
20:52:54.098029 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7c89-account-create-update-5nbdz"] Nov 25 20:52:54 crc kubenswrapper[4983]: I1125 20:52:54.111355 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7c89-account-create-update-5nbdz"] Nov 25 20:52:54 crc kubenswrapper[4983]: I1125 20:52:54.123526 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-s9cqc"] Nov 25 20:52:54 crc kubenswrapper[4983]: I1125 20:52:54.134033 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-fz9jq"] Nov 25 20:52:55 crc kubenswrapper[4983]: I1125 20:52:55.039083 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-c2ld6"] Nov 25 20:52:55 crc kubenswrapper[4983]: I1125 20:52:55.048310 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-c2ld6"] Nov 25 20:52:55 crc kubenswrapper[4983]: I1125 20:52:55.627435 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03c578b0-726b-457d-a2b4-a3582ea1704c" path="/var/lib/kubelet/pods/03c578b0-726b-457d-a2b4-a3582ea1704c/volumes" Nov 25 20:52:55 crc kubenswrapper[4983]: I1125 20:52:55.630930 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f4e8454-9ea6-414b-83a4-6c8a16cf983e" path="/var/lib/kubelet/pods/4f4e8454-9ea6-414b-83a4-6c8a16cf983e/volumes" Nov 25 20:52:55 crc kubenswrapper[4983]: I1125 20:52:55.632250 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55267844-5a91-44d3-b4bc-6292f74eb7bb" path="/var/lib/kubelet/pods/55267844-5a91-44d3-b4bc-6292f74eb7bb/volumes" Nov 25 20:52:55 crc kubenswrapper[4983]: I1125 20:52:55.633654 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e1808b9-63ae-48e4-8516-a119424817b7" path="/var/lib/kubelet/pods/9e1808b9-63ae-48e4-8516-a119424817b7/volumes" Nov 25 20:52:58 crc kubenswrapper[4983]: I1125 
20:52:58.043336 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8345-account-create-update-pvqgp"] Nov 25 20:52:58 crc kubenswrapper[4983]: I1125 20:52:58.057187 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8345-account-create-update-pvqgp"] Nov 25 20:52:59 crc kubenswrapper[4983]: I1125 20:52:59.623312 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb98ad7c-5111-4876-b5ca-4196c92b2cce" path="/var/lib/kubelet/pods/fb98ad7c-5111-4876-b5ca-4196c92b2cce/volumes" Nov 25 20:53:22 crc kubenswrapper[4983]: I1125 20:53:22.383102 4983 scope.go:117] "RemoveContainer" containerID="28e1253da13187ec01ee742a444816a50da8582c6d2bda3fc0e1b1fecc7abbc9" Nov 25 20:53:22 crc kubenswrapper[4983]: I1125 20:53:22.440587 4983 scope.go:117] "RemoveContainer" containerID="420cfcf298db8455384ae8635ec1ea1e6d3dd8278de901ac8ad966eb9eb9faf3" Nov 25 20:53:22 crc kubenswrapper[4983]: I1125 20:53:22.485970 4983 scope.go:117] "RemoveContainer" containerID="17be22fa7cf60d0d919532014e892f4b180e68e4e523db50add371c2f2f11562" Nov 25 20:53:22 crc kubenswrapper[4983]: I1125 20:53:22.528245 4983 scope.go:117] "RemoveContainer" containerID="03b1d1ffbb3fd44693d9596d39664bb323bb51a7a7e732c042877278f2b699aa" Nov 25 20:53:22 crc kubenswrapper[4983]: I1125 20:53:22.564102 4983 scope.go:117] "RemoveContainer" containerID="46dab533f0ad3a4c85a7b94041e3d6360ca71834afa248a30225039883bfe089" Nov 25 20:53:22 crc kubenswrapper[4983]: I1125 20:53:22.616717 4983 scope.go:117] "RemoveContainer" containerID="45bc78a7da72340435f26bf3f481c2ddb2ea1df2f26bf5d77fa040f113d460dd" Nov 25 20:53:24 crc kubenswrapper[4983]: I1125 20:53:24.068042 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-qcbkt"] Nov 25 20:53:24 crc kubenswrapper[4983]: I1125 20:53:24.083941 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-qcbkt"] Nov 25 20:53:25 crc kubenswrapper[4983]: I1125 
20:53:25.626748 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1deb958-bfd0-4b82-bbf7-823375a53e6b" path="/var/lib/kubelet/pods/a1deb958-bfd0-4b82-bbf7-823375a53e6b/volumes" Nov 25 20:53:27 crc kubenswrapper[4983]: I1125 20:53:27.039717 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4vwwd"] Nov 25 20:53:27 crc kubenswrapper[4983]: I1125 20:53:27.048664 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4vwwd"] Nov 25 20:53:27 crc kubenswrapper[4983]: I1125 20:53:27.645764 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e111c438-3824-4fac-9db8-ce47d6974e6d" path="/var/lib/kubelet/pods/e111c438-3824-4fac-9db8-ce47d6974e6d/volumes" Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.037172 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-643c-account-create-update-fdrdb"] Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.044767 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-chrnq"] Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.051828 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-643c-account-create-update-fdrdb"] Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.059973 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-74s6m"] Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.067141 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-chrnq"] Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.074457 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-acb0-account-create-update-n7fkg"] Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.081688 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4cc5-account-create-update-q4b96"] Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 
20:53:31.089351 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-74s6m"] Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.095729 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-acb0-account-create-update-n7fkg"] Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.102591 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4cc5-account-create-update-q4b96"] Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.644343 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06715857-e8a8-442e-8457-79b6e5506db4" path="/var/lib/kubelet/pods/06715857-e8a8-442e-8457-79b6e5506db4/volumes" Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.646228 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="272f4d2e-dc3a-4db5-a10e-891f8143f934" path="/var/lib/kubelet/pods/272f4d2e-dc3a-4db5-a10e-891f8143f934/volumes" Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.648019 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8" path="/var/lib/kubelet/pods/47f6c0c4-91b9-48fe-80b5-7dcc2cbb25a8/volumes" Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.649198 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e138f00-2737-483b-ad2a-afd28c35e48b" path="/var/lib/kubelet/pods/5e138f00-2737-483b-ad2a-afd28c35e48b/volumes" Nov 25 20:53:31 crc kubenswrapper[4983]: I1125 20:53:31.650628 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b4648d-344a-4539-af4c-ddf7c8a23068" path="/var/lib/kubelet/pods/e8b4648d-344a-4539-af4c-ddf7c8a23068/volumes" Nov 25 20:53:35 crc kubenswrapper[4983]: I1125 20:53:35.071153 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wbb46"] Nov 25 20:53:35 crc kubenswrapper[4983]: I1125 20:53:35.110200 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-db-sync-wbb46"] Nov 25 20:53:35 crc kubenswrapper[4983]: I1125 20:53:35.619603 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82817911-bcba-4071-a24b-c6dbb6f1973d" path="/var/lib/kubelet/pods/82817911-bcba-4071-a24b-c6dbb6f1973d/volumes" Nov 25 20:53:58 crc kubenswrapper[4983]: I1125 20:53:58.807018 4983 generic.go:334] "Generic (PLEG): container finished" podID="b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32" containerID="4c2d83f03b739a8bc371534c12e59e05b6dbc48d268c6b5dc10c6b0c5ac0201a" exitCode=0 Nov 25 20:53:58 crc kubenswrapper[4983]: I1125 20:53:58.807135 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" event={"ID":"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32","Type":"ContainerDied","Data":"4c2d83f03b739a8bc371534c12e59e05b6dbc48d268c6b5dc10c6b0c5ac0201a"} Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.407928 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.535652 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-inventory\") pod \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\" (UID: \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\") " Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.535813 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-ssh-key\") pod \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\" (UID: \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\") " Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.535952 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4hq5\" (UniqueName: \"kubernetes.io/projected/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-kube-api-access-v4hq5\") pod \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\" (UID: \"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32\") " Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.544772 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-kube-api-access-v4hq5" (OuterVolumeSpecName: "kube-api-access-v4hq5") pod "b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32" (UID: "b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32"). InnerVolumeSpecName "kube-api-access-v4hq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.571538 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-inventory" (OuterVolumeSpecName: "inventory") pod "b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32" (UID: "b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.579250 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32" (UID: "b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.639417 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.639464 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.639482 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4hq5\" (UniqueName: \"kubernetes.io/projected/b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32-kube-api-access-v4hq5\") on node \"crc\" DevicePath \"\"" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.841130 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" event={"ID":"b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32","Type":"ContainerDied","Data":"210f55ede6649011a0ef4509f80ec82c08339301f3ec2725b4adce4e07cc14c5"} Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.841205 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="210f55ede6649011a0ef4509f80ec82c08339301f3ec2725b4adce4e07cc14c5" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.841349 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.979099 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l"] Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 20:54:00.980076 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113cf747-b33d-4ed6-8846-055134ba5779" containerName="extract-utilities" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980124 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="113cf747-b33d-4ed6-8846-055134ba5779" containerName="extract-utilities" Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 20:54:00.980149 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" containerName="extract-content" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980158 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" containerName="extract-content" Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 20:54:00.980179 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" containerName="extract-utilities" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980187 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" containerName="extract-utilities" Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 20:54:00.980201 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69679ac7-ad44-4ea1-a9f4-5d7257108543" containerName="extract-content" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980209 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="69679ac7-ad44-4ea1-a9f4-5d7257108543" containerName="extract-content" Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 20:54:00.980222 4983 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="113cf747-b33d-4ed6-8846-055134ba5779" containerName="extract-content" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980230 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="113cf747-b33d-4ed6-8846-055134ba5779" containerName="extract-content" Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 20:54:00.980249 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69679ac7-ad44-4ea1-a9f4-5d7257108543" containerName="extract-utilities" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980259 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="69679ac7-ad44-4ea1-a9f4-5d7257108543" containerName="extract-utilities" Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 20:54:00.980278 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113cf747-b33d-4ed6-8846-055134ba5779" containerName="registry-server" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980285 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="113cf747-b33d-4ed6-8846-055134ba5779" containerName="registry-server" Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 20:54:00.980302 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69679ac7-ad44-4ea1-a9f4-5d7257108543" containerName="registry-server" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980312 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="69679ac7-ad44-4ea1-a9f4-5d7257108543" containerName="registry-server" Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 20:54:00.980329 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980340 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 
20:54:00.980358 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644f6e59-3cda-499d-bb0f-f75730d24ebd" containerName="extract-utilities" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980367 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="644f6e59-3cda-499d-bb0f-f75730d24ebd" containerName="extract-utilities" Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 20:54:00.980384 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" containerName="registry-server" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980394 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" containerName="registry-server" Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 20:54:00.980413 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644f6e59-3cda-499d-bb0f-f75730d24ebd" containerName="extract-content" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980421 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="644f6e59-3cda-499d-bb0f-f75730d24ebd" containerName="extract-content" Nov 25 20:54:00 crc kubenswrapper[4983]: E1125 20:54:00.980446 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="644f6e59-3cda-499d-bb0f-f75730d24ebd" containerName="registry-server" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980456 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="644f6e59-3cda-499d-bb0f-f75730d24ebd" containerName="registry-server" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980704 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="644f6e59-3cda-499d-bb0f-f75730d24ebd" containerName="registry-server" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980726 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 
20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980738 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="69679ac7-ad44-4ea1-a9f4-5d7257108543" containerName="registry-server" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980753 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="113cf747-b33d-4ed6-8846-055134ba5779" containerName="registry-server" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.980786 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec4f40e-3e2d-4b34-b4ad-aa79d6a18ad2" containerName="registry-server" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.981719 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.984922 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.986281 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.990370 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.990759 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:54:00 crc kubenswrapper[4983]: I1125 20:54:00.992453 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l"] Nov 25 20:54:01 crc kubenswrapper[4983]: I1125 20:54:01.151106 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-bft2l\" (UID: \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:54:01 crc kubenswrapper[4983]: I1125 20:54:01.151861 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bft2l\" (UID: \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:54:01 crc kubenswrapper[4983]: I1125 20:54:01.151941 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6spcg\" (UniqueName: \"kubernetes.io/projected/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-kube-api-access-6spcg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bft2l\" (UID: \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:54:01 crc kubenswrapper[4983]: I1125 20:54:01.253882 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bft2l\" (UID: \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:54:01 crc kubenswrapper[4983]: I1125 20:54:01.253972 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6spcg\" (UniqueName: \"kubernetes.io/projected/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-kube-api-access-6spcg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bft2l\" (UID: \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:54:01 crc kubenswrapper[4983]: I1125 20:54:01.254079 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bft2l\" (UID: \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:54:01 crc kubenswrapper[4983]: I1125 20:54:01.261459 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bft2l\" (UID: \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:54:01 crc kubenswrapper[4983]: I1125 20:54:01.278311 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bft2l\" (UID: \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:54:01 crc kubenswrapper[4983]: I1125 20:54:01.283216 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6spcg\" (UniqueName: \"kubernetes.io/projected/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-kube-api-access-6spcg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bft2l\" (UID: \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:54:01 crc kubenswrapper[4983]: I1125 20:54:01.320839 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:54:01 crc kubenswrapper[4983]: I1125 20:54:01.936227 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l"] Nov 25 20:54:01 crc kubenswrapper[4983]: I1125 20:54:01.945330 4983 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:54:02 crc kubenswrapper[4983]: I1125 20:54:02.871600 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" event={"ID":"0cc000c0-25d9-4390-b50f-da1ba38b6f7c","Type":"ContainerStarted","Data":"e5f2375642da4ab7623634851ea06a0fdef2a629d3af8ebce9d8ec985b9dd58c"} Nov 25 20:54:02 crc kubenswrapper[4983]: I1125 20:54:02.872266 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" event={"ID":"0cc000c0-25d9-4390-b50f-da1ba38b6f7c","Type":"ContainerStarted","Data":"5d331c442e93b18f9bac2f4460173b3edd06494434ad2e5783113fdf0f2dfe6f"} Nov 25 20:54:02 crc kubenswrapper[4983]: I1125 20:54:02.910425 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" podStartSLOduration=2.47108222 podStartE2EDuration="2.910402697s" podCreationTimestamp="2025-11-25 20:54:00 +0000 UTC" firstStartedPulling="2025-11-25 20:54:01.945032467 +0000 UTC m=+1623.057565869" lastFinishedPulling="2025-11-25 20:54:02.384352954 +0000 UTC m=+1623.496886346" observedRunningTime="2025-11-25 20:54:02.90597994 +0000 UTC m=+1624.018513342" watchObservedRunningTime="2025-11-25 20:54:02.910402697 +0000 UTC m=+1624.022936089" Nov 25 20:54:09 crc kubenswrapper[4983]: I1125 20:54:09.928326 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:54:09 crc kubenswrapper[4983]: I1125 20:54:09.929487 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:54:14 crc kubenswrapper[4983]: I1125 20:54:14.098542 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-sfn6q"] Nov 25 20:54:14 crc kubenswrapper[4983]: I1125 20:54:14.112899 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-sfn6q"] Nov 25 20:54:15 crc kubenswrapper[4983]: I1125 20:54:15.633184 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b7eb10-6c61-469a-87c6-d263f94dce5d" path="/var/lib/kubelet/pods/18b7eb10-6c61-469a-87c6-d263f94dce5d/volumes" Nov 25 20:54:22 crc kubenswrapper[4983]: I1125 20:54:22.923660 4983 scope.go:117] "RemoveContainer" containerID="2ca64f5188198958d454608cded6d3ba68059323a30eb2be7c1c5ceb9ba1b025" Nov 25 20:54:23 crc kubenswrapper[4983]: I1125 20:54:23.000486 4983 scope.go:117] "RemoveContainer" containerID="9f184195a370bb71623b3b0bafcf2536a9dffdaf9b4bba924fc04637ec784061" Nov 25 20:54:23 crc kubenswrapper[4983]: I1125 20:54:23.086240 4983 scope.go:117] "RemoveContainer" containerID="81dd2f779563d79c57b08ee10b6e08f743ae76217620cd159fa2338c43c05235" Nov 25 20:54:23 crc kubenswrapper[4983]: I1125 20:54:23.137147 4983 scope.go:117] "RemoveContainer" containerID="5fcf628b4ad5b400cd37b9d2299e2e52345e55871cb727acde3a2411c1f32ac3" Nov 25 20:54:23 crc kubenswrapper[4983]: I1125 20:54:23.174262 4983 scope.go:117] "RemoveContainer" 
containerID="303d8a8124cd096bd2b4359a1a71c40b7fe28f3e40e29d21c2df4a2223d429d2" Nov 25 20:54:23 crc kubenswrapper[4983]: I1125 20:54:23.246887 4983 scope.go:117] "RemoveContainer" containerID="bba2a6f29e1df6f26216b06258d0b8850199e07f4692208f41bb8a4415c5b716" Nov 25 20:54:23 crc kubenswrapper[4983]: I1125 20:54:23.282442 4983 scope.go:117] "RemoveContainer" containerID="61f182f7d550bc1e1cbbc455ed7ae33eccb4d02fe6091e934865cc72a74c1a0f" Nov 25 20:54:23 crc kubenswrapper[4983]: I1125 20:54:23.314035 4983 scope.go:117] "RemoveContainer" containerID="b559c3716b2a62a667298c768c865d8a72df6254d029ee4b7b279c846d248098" Nov 25 20:54:23 crc kubenswrapper[4983]: I1125 20:54:23.356159 4983 scope.go:117] "RemoveContainer" containerID="24a6f8e17355f0a6c7fb386636b3bb78048cec132699b90249e1ea1a7ad876df" Nov 25 20:54:24 crc kubenswrapper[4983]: I1125 20:54:24.048501 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hjjss"] Nov 25 20:54:24 crc kubenswrapper[4983]: I1125 20:54:24.059285 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hjjss"] Nov 25 20:54:24 crc kubenswrapper[4983]: I1125 20:54:24.072896 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-vq68b"] Nov 25 20:54:24 crc kubenswrapper[4983]: I1125 20:54:24.094030 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-vq68b"] Nov 25 20:54:25 crc kubenswrapper[4983]: I1125 20:54:25.632296 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6eb357f-e3ba-4631-951c-65760c2c707d" path="/var/lib/kubelet/pods/a6eb357f-e3ba-4631-951c-65760c2c707d/volumes" Nov 25 20:54:25 crc kubenswrapper[4983]: I1125 20:54:25.634052 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb577055-b6b9-4559-9f67-2253439acfc7" path="/var/lib/kubelet/pods/fb577055-b6b9-4559-9f67-2253439acfc7/volumes" Nov 25 20:54:34 crc kubenswrapper[4983]: I1125 20:54:34.075963 
4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h4hwg"] Nov 25 20:54:34 crc kubenswrapper[4983]: I1125 20:54:34.095248 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h4hwg"] Nov 25 20:54:35 crc kubenswrapper[4983]: I1125 20:54:35.629040 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24119f4e-9bb9-4f12-a031-03ec811465d1" path="/var/lib/kubelet/pods/24119f4e-9bb9-4f12-a031-03ec811465d1/volumes" Nov 25 20:54:39 crc kubenswrapper[4983]: I1125 20:54:39.928232 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:54:39 crc kubenswrapper[4983]: I1125 20:54:39.929293 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:54:42 crc kubenswrapper[4983]: I1125 20:54:42.041749 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7dskv"] Nov 25 20:54:42 crc kubenswrapper[4983]: I1125 20:54:42.077125 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7dskv"] Nov 25 20:54:43 crc kubenswrapper[4983]: I1125 20:54:43.638429 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cca9d2b3-2f79-4d38-8427-f5bfae9fc314" path="/var/lib/kubelet/pods/cca9d2b3-2f79-4d38-8427-f5bfae9fc314/volumes" Nov 25 20:55:09 crc kubenswrapper[4983]: I1125 20:55:09.927835 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 20:55:09 crc kubenswrapper[4983]: I1125 20:55:09.928711 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 20:55:09 crc kubenswrapper[4983]: I1125 20:55:09.928784 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 20:55:09 crc kubenswrapper[4983]: I1125 20:55:09.929547 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 20:55:09 crc kubenswrapper[4983]: I1125 20:55:09.929723 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" gracePeriod=600 Nov 25 20:55:10 crc kubenswrapper[4983]: E1125 20:55:10.079274 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:55:10 crc kubenswrapper[4983]: I1125 20:55:10.402597 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574"} Nov 25 20:55:10 crc kubenswrapper[4983]: I1125 20:55:10.402680 4983 scope.go:117] "RemoveContainer" containerID="e12df31370c6ce33dc30cef4c0a5235025ed26a0ae83ddc51872ed125d9d82bb" Nov 25 20:55:10 crc kubenswrapper[4983]: I1125 20:55:10.402530 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" exitCode=0 Nov 25 20:55:10 crc kubenswrapper[4983]: I1125 20:55:10.403933 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:55:10 crc kubenswrapper[4983]: E1125 20:55:10.404460 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.080489 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-75cd-account-create-update-c6gvt"] Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.102364 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-p6qg5"] Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.117227 4983 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-api-db-create-7brgc"] Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.130195 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-pmc84"] Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.141151 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-9595-account-create-update-9qjxp"] Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.153947 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-p6qg5"] Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.164483 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-7brgc"] Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.173671 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-75cd-account-create-update-c6gvt"] Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.186629 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-9595-account-create-update-9qjxp"] Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.196246 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-pmc84"] Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.619851 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35da62d7-c131-4115-9e20-9d412832b067" path="/var/lib/kubelet/pods/35da62d7-c131-4115-9e20-9d412832b067/volumes" Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.620705 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729f84c4-c7cb-446d-b881-187f884dfe16" path="/var/lib/kubelet/pods/729f84c4-c7cb-446d-b881-187f884dfe16/volumes" Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.621624 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d8435b-daff-48b1-848a-c846eddae231" path="/var/lib/kubelet/pods/85d8435b-daff-48b1-848a-c846eddae231/volumes" Nov 25 
20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.622351 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcbeee6c-1328-4f1e-a53c-ba2f245620e6" path="/var/lib/kubelet/pods/fcbeee6c-1328-4f1e-a53c-ba2f245620e6/volumes" Nov 25 20:55:11 crc kubenswrapper[4983]: I1125 20:55:11.624958 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff314c7e-be05-483d-ac0e-7cccbd562ac4" path="/var/lib/kubelet/pods/ff314c7e-be05-483d-ac0e-7cccbd562ac4/volumes" Nov 25 20:55:12 crc kubenswrapper[4983]: I1125 20:55:12.043732 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f1c8-account-create-update-ck2cx"] Nov 25 20:55:12 crc kubenswrapper[4983]: I1125 20:55:12.065197 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f1c8-account-create-update-ck2cx"] Nov 25 20:55:13 crc kubenswrapper[4983]: I1125 20:55:13.625008 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9" path="/var/lib/kubelet/pods/65b5f19e-4365-4fb1-8d41-3d7b4cddf1c9/volumes" Nov 25 20:55:19 crc kubenswrapper[4983]: I1125 20:55:19.538666 4983 generic.go:334] "Generic (PLEG): container finished" podID="0cc000c0-25d9-4390-b50f-da1ba38b6f7c" containerID="e5f2375642da4ab7623634851ea06a0fdef2a629d3af8ebce9d8ec985b9dd58c" exitCode=0 Nov 25 20:55:19 crc kubenswrapper[4983]: I1125 20:55:19.538750 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" event={"ID":"0cc000c0-25d9-4390-b50f-da1ba38b6f7c","Type":"ContainerDied","Data":"e5f2375642da4ab7623634851ea06a0fdef2a629d3af8ebce9d8ec985b9dd58c"} Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.170880 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.351266 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6spcg\" (UniqueName: \"kubernetes.io/projected/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-kube-api-access-6spcg\") pod \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\" (UID: \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\") " Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.351713 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-inventory\") pod \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\" (UID: \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\") " Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.351927 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-ssh-key\") pod \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\" (UID: \"0cc000c0-25d9-4390-b50f-da1ba38b6f7c\") " Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.360129 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-kube-api-access-6spcg" (OuterVolumeSpecName: "kube-api-access-6spcg") pod "0cc000c0-25d9-4390-b50f-da1ba38b6f7c" (UID: "0cc000c0-25d9-4390-b50f-da1ba38b6f7c"). InnerVolumeSpecName "kube-api-access-6spcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.392763 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0cc000c0-25d9-4390-b50f-da1ba38b6f7c" (UID: "0cc000c0-25d9-4390-b50f-da1ba38b6f7c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.405804 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-inventory" (OuterVolumeSpecName: "inventory") pod "0cc000c0-25d9-4390-b50f-da1ba38b6f7c" (UID: "0cc000c0-25d9-4390-b50f-da1ba38b6f7c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.455322 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6spcg\" (UniqueName: \"kubernetes.io/projected/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-kube-api-access-6spcg\") on node \"crc\" DevicePath \"\"" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.455391 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.455415 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cc000c0-25d9-4390-b50f-da1ba38b6f7c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.572159 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" event={"ID":"0cc000c0-25d9-4390-b50f-da1ba38b6f7c","Type":"ContainerDied","Data":"5d331c442e93b18f9bac2f4460173b3edd06494434ad2e5783113fdf0f2dfe6f"} Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.572232 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d331c442e93b18f9bac2f4460173b3edd06494434ad2e5783113fdf0f2dfe6f" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.572253 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bft2l" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.700059 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf"] Nov 25 20:55:21 crc kubenswrapper[4983]: E1125 20:55:21.700495 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cc000c0-25d9-4390-b50f-da1ba38b6f7c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.700509 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cc000c0-25d9-4390-b50f-da1ba38b6f7c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.700736 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc000c0-25d9-4390-b50f-da1ba38b6f7c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.705089 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.709812 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.710068 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.710256 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.717845 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.721316 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf"] Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.864361 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtnrj\" (UniqueName: \"kubernetes.io/projected/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-kube-api-access-qtnrj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf\" (UID: \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.864794 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf\" (UID: \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 
20:55:21.864893 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf\" (UID: \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.967939 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtnrj\" (UniqueName: \"kubernetes.io/projected/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-kube-api-access-qtnrj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf\" (UID: \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.968053 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf\" (UID: \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.968190 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf\" (UID: \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.974747 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf\" (UID: \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.979782 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf\" (UID: \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:21 crc kubenswrapper[4983]: I1125 20:55:21.997221 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtnrj\" (UniqueName: \"kubernetes.io/projected/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-kube-api-access-qtnrj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf\" (UID: \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:22 crc kubenswrapper[4983]: I1125 20:55:22.067158 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:22 crc kubenswrapper[4983]: W1125 20:55:22.732703 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f2f45e7_9dd0_4273_bb23_9191f1a5ea93.slice/crio-ea3a8f47f5be51613174a6b594872a3a570371a6cda74d67b6786d809f2308a4 WatchSource:0}: Error finding container ea3a8f47f5be51613174a6b594872a3a570371a6cda74d67b6786d809f2308a4: Status 404 returned error can't find the container with id ea3a8f47f5be51613174a6b594872a3a570371a6cda74d67b6786d809f2308a4 Nov 25 20:55:22 crc kubenswrapper[4983]: I1125 20:55:22.742139 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf"] Nov 25 20:55:23 crc kubenswrapper[4983]: I1125 20:55:23.626597 4983 scope.go:117] "RemoveContainer" containerID="1e923a438d7eafb4d9d3c08715c5e9d1e285c8932454a55c0ef794398df933b4" Nov 25 20:55:23 crc kubenswrapper[4983]: I1125 20:55:23.629157 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" event={"ID":"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93","Type":"ContainerStarted","Data":"12d70970db8fc0b10931856e0007dab8ada21cab75fdf20bb845878e84edadf4"} Nov 25 20:55:23 crc kubenswrapper[4983]: I1125 20:55:23.629964 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" event={"ID":"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93","Type":"ContainerStarted","Data":"ea3a8f47f5be51613174a6b594872a3a570371a6cda74d67b6786d809f2308a4"} Nov 25 20:55:23 crc kubenswrapper[4983]: I1125 20:55:23.647410 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" podStartSLOduration=2.179430662 podStartE2EDuration="2.647386266s" 
podCreationTimestamp="2025-11-25 20:55:21 +0000 UTC" firstStartedPulling="2025-11-25 20:55:22.736939561 +0000 UTC m=+1703.849472993" lastFinishedPulling="2025-11-25 20:55:23.204895205 +0000 UTC m=+1704.317428597" observedRunningTime="2025-11-25 20:55:23.63394542 +0000 UTC m=+1704.746478862" watchObservedRunningTime="2025-11-25 20:55:23.647386266 +0000 UTC m=+1704.759919658" Nov 25 20:55:23 crc kubenswrapper[4983]: I1125 20:55:23.665165 4983 scope.go:117] "RemoveContainer" containerID="0b21eb4d67ba6bcd862bbc3fe962e33fc24484160ab9cd33ac78af8ad35f819c" Nov 25 20:55:23 crc kubenswrapper[4983]: I1125 20:55:23.726621 4983 scope.go:117] "RemoveContainer" containerID="ec64a40892fe78b1305bb6754abc4d497590ae4dbe914a7a084c6c27d7eae9cd" Nov 25 20:55:23 crc kubenswrapper[4983]: I1125 20:55:23.765065 4983 scope.go:117] "RemoveContainer" containerID="1335540bf91f4cafb344bc9dd59382aaadc51c67d9fc88c9019bd0cc3beceda5" Nov 25 20:55:23 crc kubenswrapper[4983]: I1125 20:55:23.835941 4983 scope.go:117] "RemoveContainer" containerID="aca4846fb6d5c4f7566e11c39b267a4c806dc7a77e95b4f2fdb3566caef3d71f" Nov 25 20:55:23 crc kubenswrapper[4983]: I1125 20:55:23.878924 4983 scope.go:117] "RemoveContainer" containerID="63d82d90e6c2828de264e253cfaa46441b2d82717124c306419de808bca627f0" Nov 25 20:55:23 crc kubenswrapper[4983]: I1125 20:55:23.936888 4983 scope.go:117] "RemoveContainer" containerID="8697d144e10f252a712b92f758f2f90109cb1189ac8f408ec56c30a80c22ca34" Nov 25 20:55:23 crc kubenswrapper[4983]: I1125 20:55:23.969320 4983 scope.go:117] "RemoveContainer" containerID="113a01566d9c9601471efe0ccdbc6764dafd000b685623f477017d0f16c90945" Nov 25 20:55:23 crc kubenswrapper[4983]: I1125 20:55:23.998727 4983 scope.go:117] "RemoveContainer" containerID="48821378d060b6bb6a864bf11e6032ff75d7f9269c07c2bfcc7f3106f7185938" Nov 25 20:55:24 crc kubenswrapper[4983]: I1125 20:55:24.030320 4983 scope.go:117] "RemoveContainer" containerID="4c1311917ff5b05a49a4bceab6a8e15b2440be05d7516aa4f61b1325671b1c9a" Nov 25 
20:55:24 crc kubenswrapper[4983]: I1125 20:55:24.606188 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:55:24 crc kubenswrapper[4983]: E1125 20:55:24.606604 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:55:29 crc kubenswrapper[4983]: I1125 20:55:29.710400 4983 generic.go:334] "Generic (PLEG): container finished" podID="5f2f45e7-9dd0-4273-bb23-9191f1a5ea93" containerID="12d70970db8fc0b10931856e0007dab8ada21cab75fdf20bb845878e84edadf4" exitCode=0 Nov 25 20:55:29 crc kubenswrapper[4983]: I1125 20:55:29.710508 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" event={"ID":"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93","Type":"ContainerDied","Data":"12d70970db8fc0b10931856e0007dab8ada21cab75fdf20bb845878e84edadf4"} Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.334492 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.511505 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-ssh-key\") pod \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\" (UID: \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\") " Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.511616 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-inventory\") pod \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\" (UID: \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\") " Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.512078 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtnrj\" (UniqueName: \"kubernetes.io/projected/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-kube-api-access-qtnrj\") pod \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\" (UID: \"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93\") " Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.524941 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-kube-api-access-qtnrj" (OuterVolumeSpecName: "kube-api-access-qtnrj") pod "5f2f45e7-9dd0-4273-bb23-9191f1a5ea93" (UID: "5f2f45e7-9dd0-4273-bb23-9191f1a5ea93"). InnerVolumeSpecName "kube-api-access-qtnrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.545151 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-inventory" (OuterVolumeSpecName: "inventory") pod "5f2f45e7-9dd0-4273-bb23-9191f1a5ea93" (UID: "5f2f45e7-9dd0-4273-bb23-9191f1a5ea93"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.555907 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5f2f45e7-9dd0-4273-bb23-9191f1a5ea93" (UID: "5f2f45e7-9dd0-4273-bb23-9191f1a5ea93"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.616242 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.619114 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.619259 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtnrj\" (UniqueName: \"kubernetes.io/projected/5f2f45e7-9dd0-4273-bb23-9191f1a5ea93-kube-api-access-qtnrj\") on node \"crc\" DevicePath \"\"" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.736248 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" event={"ID":"5f2f45e7-9dd0-4273-bb23-9191f1a5ea93","Type":"ContainerDied","Data":"ea3a8f47f5be51613174a6b594872a3a570371a6cda74d67b6786d809f2308a4"} Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.736525 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea3a8f47f5be51613174a6b594872a3a570371a6cda74d67b6786d809f2308a4" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.736700 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.830157 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn"] Nov 25 20:55:31 crc kubenswrapper[4983]: E1125 20:55:31.830775 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f2f45e7-9dd0-4273-bb23-9191f1a5ea93" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.830801 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f2f45e7-9dd0-4273-bb23-9191f1a5ea93" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.831108 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f2f45e7-9dd0-4273-bb23-9191f1a5ea93" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.832149 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.840502 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.840583 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.840869 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.841321 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:55:31 crc kubenswrapper[4983]: I1125 20:55:31.859814 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn"] Nov 25 20:55:32 crc kubenswrapper[4983]: I1125 20:55:32.029337 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jt8gn\" (UID: \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:55:32 crc kubenswrapper[4983]: I1125 20:55:32.029476 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jt8gn\" (UID: \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:55:32 crc kubenswrapper[4983]: I1125 20:55:32.029510 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jszgt\" (UniqueName: \"kubernetes.io/projected/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-kube-api-access-jszgt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jt8gn\" (UID: \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:55:32 crc kubenswrapper[4983]: I1125 20:55:32.130760 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jt8gn\" (UID: \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:55:32 crc kubenswrapper[4983]: I1125 20:55:32.130854 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jt8gn\" (UID: \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:55:32 crc kubenswrapper[4983]: I1125 20:55:32.130887 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jszgt\" (UniqueName: \"kubernetes.io/projected/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-kube-api-access-jszgt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jt8gn\" (UID: \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:55:32 crc kubenswrapper[4983]: I1125 20:55:32.137441 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jt8gn\" (UID: 
\"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:55:32 crc kubenswrapper[4983]: I1125 20:55:32.142030 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jt8gn\" (UID: \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:55:32 crc kubenswrapper[4983]: I1125 20:55:32.158859 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jszgt\" (UniqueName: \"kubernetes.io/projected/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-kube-api-access-jszgt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jt8gn\" (UID: \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:55:32 crc kubenswrapper[4983]: I1125 20:55:32.159782 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:55:32 crc kubenswrapper[4983]: I1125 20:55:32.808749 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn"] Nov 25 20:55:33 crc kubenswrapper[4983]: I1125 20:55:33.765934 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" event={"ID":"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf","Type":"ContainerStarted","Data":"659b928cd7370db65b3f88013294171e64172e4c9bad5c73b8e51045f7043ffa"} Nov 25 20:55:33 crc kubenswrapper[4983]: I1125 20:55:33.766949 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" event={"ID":"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf","Type":"ContainerStarted","Data":"c184c30c01c60c566a53474e68852648d8a19972bfc9b9c2cd2fc797a47bf009"} Nov 25 20:55:33 crc kubenswrapper[4983]: I1125 20:55:33.801088 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" podStartSLOduration=2.319722826 podStartE2EDuration="2.801062936s" podCreationTimestamp="2025-11-25 20:55:31 +0000 UTC" firstStartedPulling="2025-11-25 20:55:32.81280126 +0000 UTC m=+1713.925334662" lastFinishedPulling="2025-11-25 20:55:33.29414138 +0000 UTC m=+1714.406674772" observedRunningTime="2025-11-25 20:55:33.785244558 +0000 UTC m=+1714.897777980" watchObservedRunningTime="2025-11-25 20:55:33.801062936 +0000 UTC m=+1714.913596328" Nov 25 20:55:38 crc kubenswrapper[4983]: I1125 20:55:38.607758 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:55:38 crc kubenswrapper[4983]: E1125 20:55:38.608890 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:55:39 crc kubenswrapper[4983]: I1125 20:55:39.045803 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6fck8"] Nov 25 20:55:39 crc kubenswrapper[4983]: I1125 20:55:39.059417 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6fck8"] Nov 25 20:55:39 crc kubenswrapper[4983]: I1125 20:55:39.643395 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ccc433-0041-48b3-906e-8b7ff8ef57ab" path="/var/lib/kubelet/pods/06ccc433-0041-48b3-906e-8b7ff8ef57ab/volumes" Nov 25 20:55:53 crc kubenswrapper[4983]: I1125 20:55:53.605638 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:55:53 crc kubenswrapper[4983]: E1125 20:55:53.606819 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:55:58 crc kubenswrapper[4983]: I1125 20:55:58.049726 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nwl6b"] Nov 25 20:55:58 crc kubenswrapper[4983]: I1125 20:55:58.071538 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-bhd54"] Nov 25 20:55:58 crc kubenswrapper[4983]: I1125 20:55:58.082125 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-cell-mapping-bhd54"] Nov 25 20:55:58 crc kubenswrapper[4983]: I1125 20:55:58.091943 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nwl6b"] Nov 25 20:55:59 crc kubenswrapper[4983]: I1125 20:55:59.628703 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ae5733a-6f6c-40cb-bc80-0110e4549e58" path="/var/lib/kubelet/pods/4ae5733a-6f6c-40cb-bc80-0110e4549e58/volumes" Nov 25 20:55:59 crc kubenswrapper[4983]: I1125 20:55:59.630858 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81bdd9b1-6872-4f3c-afe2-4d403b0db52b" path="/var/lib/kubelet/pods/81bdd9b1-6872-4f3c-afe2-4d403b0db52b/volumes" Nov 25 20:56:06 crc kubenswrapper[4983]: I1125 20:56:06.605752 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:56:06 crc kubenswrapper[4983]: E1125 20:56:06.607518 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:56:19 crc kubenswrapper[4983]: I1125 20:56:19.621986 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:56:19 crc kubenswrapper[4983]: E1125 20:56:19.622968 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:56:21 crc kubenswrapper[4983]: I1125 20:56:21.446076 4983 generic.go:334] "Generic (PLEG): container finished" podID="aa0b2190-bcf1-4f2a-8e87-4805b514d3bf" containerID="659b928cd7370db65b3f88013294171e64172e4c9bad5c73b8e51045f7043ffa" exitCode=0 Nov 25 20:56:21 crc kubenswrapper[4983]: I1125 20:56:21.446206 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" event={"ID":"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf","Type":"ContainerDied","Data":"659b928cd7370db65b3f88013294171e64172e4c9bad5c73b8e51045f7043ffa"} Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.006144 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.066964 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-ssh-key\") pod \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\" (UID: \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\") " Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.067430 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jszgt\" (UniqueName: \"kubernetes.io/projected/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-kube-api-access-jszgt\") pod \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\" (UID: \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\") " Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.067664 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-inventory\") pod \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\" (UID: \"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf\") " Nov 25 20:56:23 crc 
kubenswrapper[4983]: I1125 20:56:23.075989 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-kube-api-access-jszgt" (OuterVolumeSpecName: "kube-api-access-jszgt") pod "aa0b2190-bcf1-4f2a-8e87-4805b514d3bf" (UID: "aa0b2190-bcf1-4f2a-8e87-4805b514d3bf"). InnerVolumeSpecName "kube-api-access-jszgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.118192 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-inventory" (OuterVolumeSpecName: "inventory") pod "aa0b2190-bcf1-4f2a-8e87-4805b514d3bf" (UID: "aa0b2190-bcf1-4f2a-8e87-4805b514d3bf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.118653 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "aa0b2190-bcf1-4f2a-8e87-4805b514d3bf" (UID: "aa0b2190-bcf1-4f2a-8e87-4805b514d3bf"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.169238 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.169287 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.169301 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jszgt\" (UniqueName: \"kubernetes.io/projected/aa0b2190-bcf1-4f2a-8e87-4805b514d3bf-kube-api-access-jszgt\") on node \"crc\" DevicePath \"\"" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.474911 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" event={"ID":"aa0b2190-bcf1-4f2a-8e87-4805b514d3bf","Type":"ContainerDied","Data":"c184c30c01c60c566a53474e68852648d8a19972bfc9b9c2cd2fc797a47bf009"} Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.474984 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c184c30c01c60c566a53474e68852648d8a19972bfc9b9c2cd2fc797a47bf009" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.475135 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jt8gn" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.633177 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp"] Nov 25 20:56:23 crc kubenswrapper[4983]: E1125 20:56:23.633898 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0b2190-bcf1-4f2a-8e87-4805b514d3bf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.633935 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0b2190-bcf1-4f2a-8e87-4805b514d3bf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.634426 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0b2190-bcf1-4f2a-8e87-4805b514d3bf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.635822 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.639359 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.639832 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.640816 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.641209 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.644858 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp"] Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.685106 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tshls\" (UniqueName: \"kubernetes.io/projected/da7ae86f-6623-4fd0-b7f1-ad16a2056571-kube-api-access-tshls\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp\" (UID: \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.685405 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7ae86f-6623-4fd0-b7f1-ad16a2056571-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp\" (UID: \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.685713 4983 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da7ae86f-6623-4fd0-b7f1-ad16a2056571-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp\" (UID: \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.792504 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da7ae86f-6623-4fd0-b7f1-ad16a2056571-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp\" (UID: \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.793915 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tshls\" (UniqueName: \"kubernetes.io/projected/da7ae86f-6623-4fd0-b7f1-ad16a2056571-kube-api-access-tshls\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp\" (UID: \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.794021 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7ae86f-6623-4fd0-b7f1-ad16a2056571-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp\" (UID: \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.799747 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da7ae86f-6623-4fd0-b7f1-ad16a2056571-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp\" (UID: 
\"da7ae86f-6623-4fd0-b7f1-ad16a2056571\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.800604 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7ae86f-6623-4fd0-b7f1-ad16a2056571-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp\" (UID: \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.831640 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tshls\" (UniqueName: \"kubernetes.io/projected/da7ae86f-6623-4fd0-b7f1-ad16a2056571-kube-api-access-tshls\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp\" (UID: \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:56:23 crc kubenswrapper[4983]: I1125 20:56:23.976094 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:56:24 crc kubenswrapper[4983]: I1125 20:56:24.335411 4983 scope.go:117] "RemoveContainer" containerID="0f02a57f92677e8980e72f4f15170a2424d745ec975617511dfadab782547771" Nov 25 20:56:24 crc kubenswrapper[4983]: I1125 20:56:24.395901 4983 scope.go:117] "RemoveContainer" containerID="88b7fc1934a5ad182064fdbcdabaef6361bcfe5a1eb614e2f2c498851f83eb08" Nov 25 20:56:24 crc kubenswrapper[4983]: I1125 20:56:24.442190 4983 scope.go:117] "RemoveContainer" containerID="6c0510021931b2ce8a13c807efb5bd223725de7efb4d20772ed2ffe98ce28223" Nov 25 20:56:24 crc kubenswrapper[4983]: I1125 20:56:24.611199 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp"] Nov 25 20:56:25 crc kubenswrapper[4983]: I1125 20:56:25.513876 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" event={"ID":"da7ae86f-6623-4fd0-b7f1-ad16a2056571","Type":"ContainerStarted","Data":"7b47b7e6ba6e5fc30efa96de343b77bd97277a204c96f9e467d8ae4d770bbd2a"} Nov 25 20:56:25 crc kubenswrapper[4983]: I1125 20:56:25.514390 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" event={"ID":"da7ae86f-6623-4fd0-b7f1-ad16a2056571","Type":"ContainerStarted","Data":"e213991bb972982fa66013c866e3f3bba812c7c9e5e62479aae2d2f5e76c5f8f"} Nov 25 20:56:25 crc kubenswrapper[4983]: I1125 20:56:25.557865 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" podStartSLOduration=2.105304143 podStartE2EDuration="2.55784685s" podCreationTimestamp="2025-11-25 20:56:23 +0000 UTC" firstStartedPulling="2025-11-25 20:56:24.60945912 +0000 UTC m=+1765.721992512" lastFinishedPulling="2025-11-25 20:56:25.062001817 +0000 UTC m=+1766.174535219" 
observedRunningTime="2025-11-25 20:56:25.54614196 +0000 UTC m=+1766.658675372" watchObservedRunningTime="2025-11-25 20:56:25.55784685 +0000 UTC m=+1766.670380242" Nov 25 20:56:34 crc kubenswrapper[4983]: I1125 20:56:34.605423 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:56:34 crc kubenswrapper[4983]: E1125 20:56:34.606540 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:56:42 crc kubenswrapper[4983]: I1125 20:56:42.055121 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-q7h2t"] Nov 25 20:56:42 crc kubenswrapper[4983]: I1125 20:56:42.071381 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-q7h2t"] Nov 25 20:56:43 crc kubenswrapper[4983]: I1125 20:56:43.631348 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a28d6ec9-9763-4034-ada5-549b22bf6607" path="/var/lib/kubelet/pods/a28d6ec9-9763-4034-ada5-549b22bf6607/volumes" Nov 25 20:56:46 crc kubenswrapper[4983]: I1125 20:56:46.607987 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:56:46 crc kubenswrapper[4983]: E1125 20:56:46.610977 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:56:57 crc kubenswrapper[4983]: I1125 20:56:57.605023 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:56:57 crc kubenswrapper[4983]: E1125 20:56:57.606229 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:57:12 crc kubenswrapper[4983]: I1125 20:57:12.605368 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:57:12 crc kubenswrapper[4983]: E1125 20:57:12.606541 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:57:24 crc kubenswrapper[4983]: I1125 20:57:24.611976 4983 scope.go:117] "RemoveContainer" containerID="28e4630680fced936f2d100c4632c7a0cc62c5fed9b7d1a72528a45e0ccf7215" Nov 25 20:57:27 crc kubenswrapper[4983]: I1125 20:57:27.605660 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:57:27 crc kubenswrapper[4983]: E1125 20:57:27.607255 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:57:29 crc kubenswrapper[4983]: I1125 20:57:29.496793 4983 generic.go:334] "Generic (PLEG): container finished" podID="da7ae86f-6623-4fd0-b7f1-ad16a2056571" containerID="7b47b7e6ba6e5fc30efa96de343b77bd97277a204c96f9e467d8ae4d770bbd2a" exitCode=0 Nov 25 20:57:29 crc kubenswrapper[4983]: I1125 20:57:29.496953 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" event={"ID":"da7ae86f-6623-4fd0-b7f1-ad16a2056571","Type":"ContainerDied","Data":"7b47b7e6ba6e5fc30efa96de343b77bd97277a204c96f9e467d8ae4d770bbd2a"} Nov 25 20:57:30 crc kubenswrapper[4983]: I1125 20:57:30.948739 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.033927 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da7ae86f-6623-4fd0-b7f1-ad16a2056571-ssh-key\") pod \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\" (UID: \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\") " Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.034086 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tshls\" (UniqueName: \"kubernetes.io/projected/da7ae86f-6623-4fd0-b7f1-ad16a2056571-kube-api-access-tshls\") pod \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\" (UID: \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\") " Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.034334 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7ae86f-6623-4fd0-b7f1-ad16a2056571-inventory\") pod \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\" (UID: \"da7ae86f-6623-4fd0-b7f1-ad16a2056571\") " Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.041580 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7ae86f-6623-4fd0-b7f1-ad16a2056571-kube-api-access-tshls" (OuterVolumeSpecName: "kube-api-access-tshls") pod "da7ae86f-6623-4fd0-b7f1-ad16a2056571" (UID: "da7ae86f-6623-4fd0-b7f1-ad16a2056571"). InnerVolumeSpecName "kube-api-access-tshls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.065904 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7ae86f-6623-4fd0-b7f1-ad16a2056571-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "da7ae86f-6623-4fd0-b7f1-ad16a2056571" (UID: "da7ae86f-6623-4fd0-b7f1-ad16a2056571"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.066624 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da7ae86f-6623-4fd0-b7f1-ad16a2056571-inventory" (OuterVolumeSpecName: "inventory") pod "da7ae86f-6623-4fd0-b7f1-ad16a2056571" (UID: "da7ae86f-6623-4fd0-b7f1-ad16a2056571"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.137573 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tshls\" (UniqueName: \"kubernetes.io/projected/da7ae86f-6623-4fd0-b7f1-ad16a2056571-kube-api-access-tshls\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.137612 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da7ae86f-6623-4fd0-b7f1-ad16a2056571-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.137625 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da7ae86f-6623-4fd0-b7f1-ad16a2056571-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.525951 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" event={"ID":"da7ae86f-6623-4fd0-b7f1-ad16a2056571","Type":"ContainerDied","Data":"e213991bb972982fa66013c866e3f3bba812c7c9e5e62479aae2d2f5e76c5f8f"} Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.526047 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e213991bb972982fa66013c866e3f3bba812c7c9e5e62479aae2d2f5e76c5f8f" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.526145 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.662737 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdnxr"] Nov 25 20:57:31 crc kubenswrapper[4983]: E1125 20:57:31.663471 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7ae86f-6623-4fd0-b7f1-ad16a2056571" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.663491 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7ae86f-6623-4fd0-b7f1-ad16a2056571" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.663783 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7ae86f-6623-4fd0-b7f1-ad16a2056571" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.672013 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.675114 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdnxr"] Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.678327 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.678763 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.678980 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.679170 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.753921 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3080e73e-fbc1-4a80-827c-386f923dd01b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cdnxr\" (UID: \"3080e73e-fbc1-4a80-827c-386f923dd01b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.754005 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6sdj\" (UniqueName: \"kubernetes.io/projected/3080e73e-fbc1-4a80-827c-386f923dd01b-kube-api-access-n6sdj\") pod \"ssh-known-hosts-edpm-deployment-cdnxr\" (UID: \"3080e73e-fbc1-4a80-827c-386f923dd01b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.754490 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3080e73e-fbc1-4a80-827c-386f923dd01b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cdnxr\" (UID: \"3080e73e-fbc1-4a80-827c-386f923dd01b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.857546 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3080e73e-fbc1-4a80-827c-386f923dd01b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cdnxr\" (UID: \"3080e73e-fbc1-4a80-827c-386f923dd01b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.857757 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3080e73e-fbc1-4a80-827c-386f923dd01b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cdnxr\" (UID: \"3080e73e-fbc1-4a80-827c-386f923dd01b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.857796 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6sdj\" (UniqueName: \"kubernetes.io/projected/3080e73e-fbc1-4a80-827c-386f923dd01b-kube-api-access-n6sdj\") pod \"ssh-known-hosts-edpm-deployment-cdnxr\" (UID: \"3080e73e-fbc1-4a80-827c-386f923dd01b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.867351 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3080e73e-fbc1-4a80-827c-386f923dd01b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cdnxr\" (UID: \"3080e73e-fbc1-4a80-827c-386f923dd01b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:31 crc 
kubenswrapper[4983]: I1125 20:57:31.886169 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3080e73e-fbc1-4a80-827c-386f923dd01b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cdnxr\" (UID: \"3080e73e-fbc1-4a80-827c-386f923dd01b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:31 crc kubenswrapper[4983]: I1125 20:57:31.887254 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6sdj\" (UniqueName: \"kubernetes.io/projected/3080e73e-fbc1-4a80-827c-386f923dd01b-kube-api-access-n6sdj\") pod \"ssh-known-hosts-edpm-deployment-cdnxr\" (UID: \"3080e73e-fbc1-4a80-827c-386f923dd01b\") " pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:32 crc kubenswrapper[4983]: I1125 20:57:32.010218 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:32 crc kubenswrapper[4983]: I1125 20:57:32.661871 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cdnxr"] Nov 25 20:57:33 crc kubenswrapper[4983]: I1125 20:57:33.550077 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" event={"ID":"3080e73e-fbc1-4a80-827c-386f923dd01b","Type":"ContainerStarted","Data":"49084fd36482f56ec3a4df3862dd509e9b6df342372734219c36d3ef2a05aff8"} Nov 25 20:57:33 crc kubenswrapper[4983]: I1125 20:57:33.550502 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" event={"ID":"3080e73e-fbc1-4a80-827c-386f923dd01b","Type":"ContainerStarted","Data":"79624d58398812520453315be954145de635b7ed7f766006dc1876238d25b0c4"} Nov 25 20:57:33 crc kubenswrapper[4983]: I1125 20:57:33.590788 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" podStartSLOduration=2.07843217 podStartE2EDuration="2.59075163s" podCreationTimestamp="2025-11-25 20:57:31 +0000 UTC" firstStartedPulling="2025-11-25 20:57:32.664444234 +0000 UTC m=+1833.776977626" lastFinishedPulling="2025-11-25 20:57:33.176763674 +0000 UTC m=+1834.289297086" observedRunningTime="2025-11-25 20:57:33.573649518 +0000 UTC m=+1834.686182930" watchObservedRunningTime="2025-11-25 20:57:33.59075163 +0000 UTC m=+1834.703285032" Nov 25 20:57:41 crc kubenswrapper[4983]: I1125 20:57:41.605350 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:57:41 crc kubenswrapper[4983]: E1125 20:57:41.606414 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:57:41 crc kubenswrapper[4983]: I1125 20:57:41.669152 4983 generic.go:334] "Generic (PLEG): container finished" podID="3080e73e-fbc1-4a80-827c-386f923dd01b" containerID="49084fd36482f56ec3a4df3862dd509e9b6df342372734219c36d3ef2a05aff8" exitCode=0 Nov 25 20:57:41 crc kubenswrapper[4983]: I1125 20:57:41.669215 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" event={"ID":"3080e73e-fbc1-4a80-827c-386f923dd01b","Type":"ContainerDied","Data":"49084fd36482f56ec3a4df3862dd509e9b6df342372734219c36d3ef2a05aff8"} Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.184060 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.269140 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3080e73e-fbc1-4a80-827c-386f923dd01b-ssh-key-openstack-edpm-ipam\") pod \"3080e73e-fbc1-4a80-827c-386f923dd01b\" (UID: \"3080e73e-fbc1-4a80-827c-386f923dd01b\") " Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.269369 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3080e73e-fbc1-4a80-827c-386f923dd01b-inventory-0\") pod \"3080e73e-fbc1-4a80-827c-386f923dd01b\" (UID: \"3080e73e-fbc1-4a80-827c-386f923dd01b\") " Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.269674 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6sdj\" (UniqueName: \"kubernetes.io/projected/3080e73e-fbc1-4a80-827c-386f923dd01b-kube-api-access-n6sdj\") pod \"3080e73e-fbc1-4a80-827c-386f923dd01b\" (UID: \"3080e73e-fbc1-4a80-827c-386f923dd01b\") " Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.278549 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3080e73e-fbc1-4a80-827c-386f923dd01b-kube-api-access-n6sdj" (OuterVolumeSpecName: "kube-api-access-n6sdj") pod "3080e73e-fbc1-4a80-827c-386f923dd01b" (UID: "3080e73e-fbc1-4a80-827c-386f923dd01b"). InnerVolumeSpecName "kube-api-access-n6sdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.313678 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3080e73e-fbc1-4a80-827c-386f923dd01b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3080e73e-fbc1-4a80-827c-386f923dd01b" (UID: "3080e73e-fbc1-4a80-827c-386f923dd01b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.323873 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3080e73e-fbc1-4a80-827c-386f923dd01b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "3080e73e-fbc1-4a80-827c-386f923dd01b" (UID: "3080e73e-fbc1-4a80-827c-386f923dd01b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.372826 4983 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/3080e73e-fbc1-4a80-827c-386f923dd01b-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.372924 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6sdj\" (UniqueName: \"kubernetes.io/projected/3080e73e-fbc1-4a80-827c-386f923dd01b-kube-api-access-n6sdj\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.372951 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3080e73e-fbc1-4a80-827c-386f923dd01b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.701286 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" 
event={"ID":"3080e73e-fbc1-4a80-827c-386f923dd01b","Type":"ContainerDied","Data":"79624d58398812520453315be954145de635b7ed7f766006dc1876238d25b0c4"} Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.701341 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79624d58398812520453315be954145de635b7ed7f766006dc1876238d25b0c4" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.701473 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cdnxr" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.809448 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm"] Nov 25 20:57:43 crc kubenswrapper[4983]: E1125 20:57:43.810116 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3080e73e-fbc1-4a80-827c-386f923dd01b" containerName="ssh-known-hosts-edpm-deployment" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.810141 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="3080e73e-fbc1-4a80-827c-386f923dd01b" containerName="ssh-known-hosts-edpm-deployment" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.810525 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="3080e73e-fbc1-4a80-827c-386f923dd01b" containerName="ssh-known-hosts-edpm-deployment" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.858996 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm"] Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.859601 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.865500 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.865515 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.865817 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.866273 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.885506 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/283ae6fd-423e-4c78-9c5b-85aab813c0b5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6lsbm\" (UID: \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.885773 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/283ae6fd-423e-4c78-9c5b-85aab813c0b5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6lsbm\" (UID: \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.886110 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wntn2\" (UniqueName: \"kubernetes.io/projected/283ae6fd-423e-4c78-9c5b-85aab813c0b5-kube-api-access-wntn2\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-6lsbm\" (UID: \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.986972 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wntn2\" (UniqueName: \"kubernetes.io/projected/283ae6fd-423e-4c78-9c5b-85aab813c0b5-kube-api-access-wntn2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6lsbm\" (UID: \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.987058 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/283ae6fd-423e-4c78-9c5b-85aab813c0b5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6lsbm\" (UID: \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.987154 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/283ae6fd-423e-4c78-9c5b-85aab813c0b5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6lsbm\" (UID: \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.995048 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/283ae6fd-423e-4c78-9c5b-85aab813c0b5-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6lsbm\" (UID: \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:43 crc kubenswrapper[4983]: I1125 20:57:43.995064 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/283ae6fd-423e-4c78-9c5b-85aab813c0b5-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6lsbm\" (UID: \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:44 crc kubenswrapper[4983]: I1125 20:57:44.011054 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wntn2\" (UniqueName: \"kubernetes.io/projected/283ae6fd-423e-4c78-9c5b-85aab813c0b5-kube-api-access-wntn2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6lsbm\" (UID: \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:44 crc kubenswrapper[4983]: I1125 20:57:44.187645 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:44 crc kubenswrapper[4983]: I1125 20:57:44.600826 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm"] Nov 25 20:57:44 crc kubenswrapper[4983]: I1125 20:57:44.713732 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" event={"ID":"283ae6fd-423e-4c78-9c5b-85aab813c0b5","Type":"ContainerStarted","Data":"ad8f71da7a71cbaeab28d0a8d1402147462af684ec5e6e68cb5bda61b4989724"} Nov 25 20:57:45 crc kubenswrapper[4983]: I1125 20:57:45.735159 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" event={"ID":"283ae6fd-423e-4c78-9c5b-85aab813c0b5","Type":"ContainerStarted","Data":"3b1770e45dae8a11369e2553bd309500979feccf915af2d15ee7c74c16c20247"} Nov 25 20:57:45 crc kubenswrapper[4983]: I1125 20:57:45.767082 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" podStartSLOduration=2.19494125 
podStartE2EDuration="2.767057841s" podCreationTimestamp="2025-11-25 20:57:43 +0000 UTC" firstStartedPulling="2025-11-25 20:57:44.593138592 +0000 UTC m=+1845.705671994" lastFinishedPulling="2025-11-25 20:57:45.165255153 +0000 UTC m=+1846.277788585" observedRunningTime="2025-11-25 20:57:45.755275599 +0000 UTC m=+1846.867809001" watchObservedRunningTime="2025-11-25 20:57:45.767057841 +0000 UTC m=+1846.879591243" Nov 25 20:57:54 crc kubenswrapper[4983]: I1125 20:57:54.874893 4983 generic.go:334] "Generic (PLEG): container finished" podID="283ae6fd-423e-4c78-9c5b-85aab813c0b5" containerID="3b1770e45dae8a11369e2553bd309500979feccf915af2d15ee7c74c16c20247" exitCode=0 Nov 25 20:57:54 crc kubenswrapper[4983]: I1125 20:57:54.875017 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" event={"ID":"283ae6fd-423e-4c78-9c5b-85aab813c0b5","Type":"ContainerDied","Data":"3b1770e45dae8a11369e2553bd309500979feccf915af2d15ee7c74c16c20247"} Nov 25 20:57:55 crc kubenswrapper[4983]: I1125 20:57:55.605788 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:57:55 crc kubenswrapper[4983]: E1125 20:57:55.606998 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.419042 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.501925 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/283ae6fd-423e-4c78-9c5b-85aab813c0b5-ssh-key\") pod \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\" (UID: \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\") " Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.503279 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wntn2\" (UniqueName: \"kubernetes.io/projected/283ae6fd-423e-4c78-9c5b-85aab813c0b5-kube-api-access-wntn2\") pod \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\" (UID: \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\") " Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.503393 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/283ae6fd-423e-4c78-9c5b-85aab813c0b5-inventory\") pod \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\" (UID: \"283ae6fd-423e-4c78-9c5b-85aab813c0b5\") " Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.510074 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283ae6fd-423e-4c78-9c5b-85aab813c0b5-kube-api-access-wntn2" (OuterVolumeSpecName: "kube-api-access-wntn2") pod "283ae6fd-423e-4c78-9c5b-85aab813c0b5" (UID: "283ae6fd-423e-4c78-9c5b-85aab813c0b5"). InnerVolumeSpecName "kube-api-access-wntn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.556545 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283ae6fd-423e-4c78-9c5b-85aab813c0b5-inventory" (OuterVolumeSpecName: "inventory") pod "283ae6fd-423e-4c78-9c5b-85aab813c0b5" (UID: "283ae6fd-423e-4c78-9c5b-85aab813c0b5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.560172 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283ae6fd-423e-4c78-9c5b-85aab813c0b5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "283ae6fd-423e-4c78-9c5b-85aab813c0b5" (UID: "283ae6fd-423e-4c78-9c5b-85aab813c0b5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.606541 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wntn2\" (UniqueName: \"kubernetes.io/projected/283ae6fd-423e-4c78-9c5b-85aab813c0b5-kube-api-access-wntn2\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.606649 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/283ae6fd-423e-4c78-9c5b-85aab813c0b5-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.606662 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/283ae6fd-423e-4c78-9c5b-85aab813c0b5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.904636 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" event={"ID":"283ae6fd-423e-4c78-9c5b-85aab813c0b5","Type":"ContainerDied","Data":"ad8f71da7a71cbaeab28d0a8d1402147462af684ec5e6e68cb5bda61b4989724"} Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.904695 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad8f71da7a71cbaeab28d0a8d1402147462af684ec5e6e68cb5bda61b4989724" Nov 25 20:57:56 crc kubenswrapper[4983]: I1125 20:57:56.904710 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6lsbm" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.022622 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt"] Nov 25 20:57:57 crc kubenswrapper[4983]: E1125 20:57:57.023970 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283ae6fd-423e-4c78-9c5b-85aab813c0b5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.024121 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="283ae6fd-423e-4c78-9c5b-85aab813c0b5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.024668 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="283ae6fd-423e-4c78-9c5b-85aab813c0b5" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.025908 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.028637 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.029247 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.029334 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.030278 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.043144 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt"] Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.116181 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11f44fe7-6b39-418d-9c54-d0f05318f412-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt\" (UID: \"11f44fe7-6b39-418d-9c54-d0f05318f412\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.116580 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11f44fe7-6b39-418d-9c54-d0f05318f412-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt\" (UID: \"11f44fe7-6b39-418d-9c54-d0f05318f412\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.116771 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k6w2\" (UniqueName: \"kubernetes.io/projected/11f44fe7-6b39-418d-9c54-d0f05318f412-kube-api-access-7k6w2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt\" (UID: \"11f44fe7-6b39-418d-9c54-d0f05318f412\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.219040 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k6w2\" (UniqueName: \"kubernetes.io/projected/11f44fe7-6b39-418d-9c54-d0f05318f412-kube-api-access-7k6w2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt\" (UID: \"11f44fe7-6b39-418d-9c54-d0f05318f412\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.219230 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11f44fe7-6b39-418d-9c54-d0f05318f412-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt\" (UID: \"11f44fe7-6b39-418d-9c54-d0f05318f412\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.219267 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11f44fe7-6b39-418d-9c54-d0f05318f412-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt\" (UID: \"11f44fe7-6b39-418d-9c54-d0f05318f412\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.231768 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11f44fe7-6b39-418d-9c54-d0f05318f412-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt\" (UID: 
\"11f44fe7-6b39-418d-9c54-d0f05318f412\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.231933 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11f44fe7-6b39-418d-9c54-d0f05318f412-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt\" (UID: \"11f44fe7-6b39-418d-9c54-d0f05318f412\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.254276 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k6w2\" (UniqueName: \"kubernetes.io/projected/11f44fe7-6b39-418d-9c54-d0f05318f412-kube-api-access-7k6w2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt\" (UID: \"11f44fe7-6b39-418d-9c54-d0f05318f412\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:57:57 crc kubenswrapper[4983]: I1125 20:57:57.359340 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:57:58 crc kubenswrapper[4983]: I1125 20:57:58.012940 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt"] Nov 25 20:57:58 crc kubenswrapper[4983]: I1125 20:57:58.939661 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" event={"ID":"11f44fe7-6b39-418d-9c54-d0f05318f412","Type":"ContainerStarted","Data":"0c6ddb0b755a237d52c23449c0c7b3250133138aaca367312409cb23644e6718"} Nov 25 20:57:58 crc kubenswrapper[4983]: I1125 20:57:58.940874 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" event={"ID":"11f44fe7-6b39-418d-9c54-d0f05318f412","Type":"ContainerStarted","Data":"fe853c76712e8e548ef394353422a52417132b184f1ff211917eb2bba3ada40a"} Nov 25 20:57:58 crc kubenswrapper[4983]: I1125 20:57:58.969378 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" podStartSLOduration=2.542735644 podStartE2EDuration="2.969350795s" podCreationTimestamp="2025-11-25 20:57:56 +0000 UTC" firstStartedPulling="2025-11-25 20:57:58.007333854 +0000 UTC m=+1859.119867296" lastFinishedPulling="2025-11-25 20:57:58.433949045 +0000 UTC m=+1859.546482447" observedRunningTime="2025-11-25 20:57:58.961913548 +0000 UTC m=+1860.074446970" watchObservedRunningTime="2025-11-25 20:57:58.969350795 +0000 UTC m=+1860.081884217" Nov 25 20:58:07 crc kubenswrapper[4983]: I1125 20:58:07.605141 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:58:07 crc kubenswrapper[4983]: E1125 20:58:07.607878 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:58:09 crc kubenswrapper[4983]: E1125 20:58:09.711611 4983 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11f44fe7_6b39_418d_9c54_d0f05318f412.slice/crio-conmon-0c6ddb0b755a237d52c23449c0c7b3250133138aaca367312409cb23644e6718.scope\": RecentStats: unable to find data in memory cache]" Nov 25 20:58:10 crc kubenswrapper[4983]: I1125 20:58:10.072018 4983 generic.go:334] "Generic (PLEG): container finished" podID="11f44fe7-6b39-418d-9c54-d0f05318f412" containerID="0c6ddb0b755a237d52c23449c0c7b3250133138aaca367312409cb23644e6718" exitCode=0 Nov 25 20:58:10 crc kubenswrapper[4983]: I1125 20:58:10.072118 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" event={"ID":"11f44fe7-6b39-418d-9c54-d0f05318f412","Type":"ContainerDied","Data":"0c6ddb0b755a237d52c23449c0c7b3250133138aaca367312409cb23644e6718"} Nov 25 20:58:11 crc kubenswrapper[4983]: I1125 20:58:11.669007 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:58:11 crc kubenswrapper[4983]: I1125 20:58:11.783570 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11f44fe7-6b39-418d-9c54-d0f05318f412-inventory\") pod \"11f44fe7-6b39-418d-9c54-d0f05318f412\" (UID: \"11f44fe7-6b39-418d-9c54-d0f05318f412\") " Nov 25 20:58:11 crc kubenswrapper[4983]: I1125 20:58:11.783710 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k6w2\" (UniqueName: \"kubernetes.io/projected/11f44fe7-6b39-418d-9c54-d0f05318f412-kube-api-access-7k6w2\") pod \"11f44fe7-6b39-418d-9c54-d0f05318f412\" (UID: \"11f44fe7-6b39-418d-9c54-d0f05318f412\") " Nov 25 20:58:11 crc kubenswrapper[4983]: I1125 20:58:11.783747 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11f44fe7-6b39-418d-9c54-d0f05318f412-ssh-key\") pod \"11f44fe7-6b39-418d-9c54-d0f05318f412\" (UID: \"11f44fe7-6b39-418d-9c54-d0f05318f412\") " Nov 25 20:58:11 crc kubenswrapper[4983]: I1125 20:58:11.791996 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f44fe7-6b39-418d-9c54-d0f05318f412-kube-api-access-7k6w2" (OuterVolumeSpecName: "kube-api-access-7k6w2") pod "11f44fe7-6b39-418d-9c54-d0f05318f412" (UID: "11f44fe7-6b39-418d-9c54-d0f05318f412"). InnerVolumeSpecName "kube-api-access-7k6w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:58:11 crc kubenswrapper[4983]: I1125 20:58:11.821794 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f44fe7-6b39-418d-9c54-d0f05318f412-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "11f44fe7-6b39-418d-9c54-d0f05318f412" (UID: "11f44fe7-6b39-418d-9c54-d0f05318f412"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:58:11 crc kubenswrapper[4983]: I1125 20:58:11.847041 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f44fe7-6b39-418d-9c54-d0f05318f412-inventory" (OuterVolumeSpecName: "inventory") pod "11f44fe7-6b39-418d-9c54-d0f05318f412" (UID: "11f44fe7-6b39-418d-9c54-d0f05318f412"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:58:11 crc kubenswrapper[4983]: I1125 20:58:11.887145 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11f44fe7-6b39-418d-9c54-d0f05318f412-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:58:11 crc kubenswrapper[4983]: I1125 20:58:11.887192 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k6w2\" (UniqueName: \"kubernetes.io/projected/11f44fe7-6b39-418d-9c54-d0f05318f412-kube-api-access-7k6w2\") on node \"crc\" DevicePath \"\"" Nov 25 20:58:11 crc kubenswrapper[4983]: I1125 20:58:11.887213 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/11f44fe7-6b39-418d-9c54-d0f05318f412-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.098391 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" event={"ID":"11f44fe7-6b39-418d-9c54-d0f05318f412","Type":"ContainerDied","Data":"fe853c76712e8e548ef394353422a52417132b184f1ff211917eb2bba3ada40a"} Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.098451 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe853c76712e8e548ef394353422a52417132b184f1ff211917eb2bba3ada40a" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.098534 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.282597 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v"] Nov 25 20:58:12 crc kubenswrapper[4983]: E1125 20:58:12.283111 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f44fe7-6b39-418d-9c54-d0f05318f412" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.283131 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f44fe7-6b39-418d-9c54-d0f05318f412" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.283374 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f44fe7-6b39-418d-9c54-d0f05318f412" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.284191 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.287693 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.287876 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.288508 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.288616 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.289929 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.290264 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.297857 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.298498 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.302612 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v"] Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400022 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400154 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400189 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400260 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400281 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400306 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400327 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400354 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57fwb\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-kube-api-access-57fwb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400376 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400404 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400435 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400474 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400519 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.400541 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503384 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503522 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503568 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503648 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503674 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503704 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503733 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-telemetry-default-certs-0\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503757 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57fwb\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-kube-api-access-57fwb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503780 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503812 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503847 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503878 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503912 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.503934 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.511070 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc 
kubenswrapper[4983]: I1125 20:58:12.511251 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.511767 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.512029 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.512146 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.512355 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.512902 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.513446 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.515242 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.515489 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.515772 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.517427 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.517899 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.530067 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57fwb\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-kube-api-access-57fwb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 
25 20:58:12 crc kubenswrapper[4983]: I1125 20:58:12.600493 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:58:13 crc kubenswrapper[4983]: W1125 20:58:13.256086 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode24ee2d5_4f5b_4102_a0ca_45f7aed3c7b8.slice/crio-b274b2e73fce34da75c8120cf764511c95f6dfec6bf922515a9f52cc6af0ac92 WatchSource:0}: Error finding container b274b2e73fce34da75c8120cf764511c95f6dfec6bf922515a9f52cc6af0ac92: Status 404 returned error can't find the container with id b274b2e73fce34da75c8120cf764511c95f6dfec6bf922515a9f52cc6af0ac92 Nov 25 20:58:13 crc kubenswrapper[4983]: I1125 20:58:13.262349 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v"] Nov 25 20:58:14 crc kubenswrapper[4983]: I1125 20:58:14.123011 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" event={"ID":"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8","Type":"ContainerStarted","Data":"53ec5303d3f463df064c49af28dc78094cd7882098d78bfa729c38b5f112c44f"} Nov 25 20:58:14 crc kubenswrapper[4983]: I1125 20:58:14.123490 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" event={"ID":"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8","Type":"ContainerStarted","Data":"b274b2e73fce34da75c8120cf764511c95f6dfec6bf922515a9f52cc6af0ac92"} Nov 25 20:58:14 crc kubenswrapper[4983]: I1125 20:58:14.162053 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" podStartSLOduration=1.75329794 podStartE2EDuration="2.162027527s" podCreationTimestamp="2025-11-25 20:58:12 +0000 UTC" firstStartedPulling="2025-11-25 
20:58:13.259657895 +0000 UTC m=+1874.372191297" lastFinishedPulling="2025-11-25 20:58:13.668387492 +0000 UTC m=+1874.780920884" observedRunningTime="2025-11-25 20:58:14.149345642 +0000 UTC m=+1875.261879074" watchObservedRunningTime="2025-11-25 20:58:14.162027527 +0000 UTC m=+1875.274560959" Nov 25 20:58:20 crc kubenswrapper[4983]: I1125 20:58:20.605868 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:58:20 crc kubenswrapper[4983]: E1125 20:58:20.607087 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:58:32 crc kubenswrapper[4983]: I1125 20:58:32.606219 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:58:32 crc kubenswrapper[4983]: E1125 20:58:32.607400 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:58:43 crc kubenswrapper[4983]: I1125 20:58:43.606820 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:58:43 crc kubenswrapper[4983]: E1125 20:58:43.608652 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:58:56 crc kubenswrapper[4983]: I1125 20:58:56.605310 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:58:56 crc kubenswrapper[4983]: E1125 20:58:56.606839 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:59:01 crc kubenswrapper[4983]: I1125 20:59:01.812377 4983 generic.go:334] "Generic (PLEG): container finished" podID="e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" containerID="53ec5303d3f463df064c49af28dc78094cd7882098d78bfa729c38b5f112c44f" exitCode=0 Nov 25 20:59:01 crc kubenswrapper[4983]: I1125 20:59:01.813350 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" event={"ID":"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8","Type":"ContainerDied","Data":"53ec5303d3f463df064c49af28dc78094cd7882098d78bfa729c38b5f112c44f"} Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.341820 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.440649 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.440772 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-bootstrap-combined-ca-bundle\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.440817 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-neutron-metadata-combined-ca-bundle\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.440841 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-ssh-key\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.440888 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-inventory\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc 
kubenswrapper[4983]: I1125 20:59:03.440914 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57fwb\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-kube-api-access-57fwb\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.440942 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.440982 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-telemetry-combined-ca-bundle\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.440999 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-libvirt-combined-ca-bundle\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.441022 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-repo-setup-combined-ca-bundle\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.441050 4983 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-nova-combined-ca-bundle\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.441074 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-ovn-combined-ca-bundle\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.441105 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.441143 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\" (UID: \"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8\") " Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.447773 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.449504 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.450289 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.451193 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.451428 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). 
InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.451446 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.452232 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.452464 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-kube-api-access-57fwb" (OuterVolumeSpecName: "kube-api-access-57fwb") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "kube-api-access-57fwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.453094 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.454797 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.455152 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.455430 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.495597 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-inventory" (OuterVolumeSpecName: "inventory") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.496022 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" (UID: "e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543755 4983 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543796 4983 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543815 4983 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543829 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543844 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543857 4983 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57fwb\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-kube-api-access-57fwb\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543870 4983 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543881 4983 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543893 4983 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543903 4983 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543915 4983 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543928 4983 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc 
kubenswrapper[4983]: I1125 20:59:03.543939 4983 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.543953 4983 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.842081 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" event={"ID":"e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8","Type":"ContainerDied","Data":"b274b2e73fce34da75c8120cf764511c95f6dfec6bf922515a9f52cc6af0ac92"} Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.842148 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b274b2e73fce34da75c8120cf764511c95f6dfec6bf922515a9f52cc6af0ac92" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.842207 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.989041 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n"] Nov 25 20:59:03 crc kubenswrapper[4983]: E1125 20:59:03.989722 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.989753 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.990159 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.991332 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.994942 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.995228 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.995605 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.995925 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 20:59:03 crc kubenswrapper[4983]: I1125 20:59:03.996296 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.023599 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n"] Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.054695 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.054784 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.054823 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgwpg\" (UniqueName: \"kubernetes.io/projected/71b3a358-6645-404b-8d14-cb6371e7fce4-kube-api-access-xgwpg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.054963 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.055025 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/71b3a358-6645-404b-8d14-cb6371e7fce4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.155964 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.156051 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/71b3a358-6645-404b-8d14-cb6371e7fce4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.156142 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.156175 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.156197 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgwpg\" (UniqueName: \"kubernetes.io/projected/71b3a358-6645-404b-8d14-cb6371e7fce4-kube-api-access-xgwpg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.158247 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/71b3a358-6645-404b-8d14-cb6371e7fce4-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc 
kubenswrapper[4983]: I1125 20:59:04.162901 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.163273 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.164230 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.193568 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgwpg\" (UniqueName: \"kubernetes.io/projected/71b3a358-6645-404b-8d14-cb6371e7fce4-kube-api-access-xgwpg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qkd9n\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.316585 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.920474 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n"] Nov 25 20:59:04 crc kubenswrapper[4983]: I1125 20:59:04.925465 4983 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 20:59:05 crc kubenswrapper[4983]: I1125 20:59:05.871965 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" event={"ID":"71b3a358-6645-404b-8d14-cb6371e7fce4","Type":"ContainerStarted","Data":"a9c4a600a2483459530f961c0ef130932ed3a09edf4a527e55720cf960e226eb"} Nov 25 20:59:05 crc kubenswrapper[4983]: I1125 20:59:05.872653 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" event={"ID":"71b3a358-6645-404b-8d14-cb6371e7fce4","Type":"ContainerStarted","Data":"5c30e73a6b10567f5f2c7ceb786d52d270a9a4301b408f50621504058503414f"} Nov 25 20:59:05 crc kubenswrapper[4983]: I1125 20:59:05.905678 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" podStartSLOduration=2.485386211 podStartE2EDuration="2.905649663s" podCreationTimestamp="2025-11-25 20:59:03 +0000 UTC" firstStartedPulling="2025-11-25 20:59:04.92501768 +0000 UTC m=+1926.037551112" lastFinishedPulling="2025-11-25 20:59:05.345281172 +0000 UTC m=+1926.457814564" observedRunningTime="2025-11-25 20:59:05.900473106 +0000 UTC m=+1927.013006508" watchObservedRunningTime="2025-11-25 20:59:05.905649663 +0000 UTC m=+1927.018183085" Nov 25 20:59:09 crc kubenswrapper[4983]: I1125 20:59:09.618159 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:59:09 crc kubenswrapper[4983]: E1125 20:59:09.621831 4983 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:59:24 crc kubenswrapper[4983]: I1125 20:59:24.604804 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:59:24 crc kubenswrapper[4983]: E1125 20:59:24.605787 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:59:37 crc kubenswrapper[4983]: I1125 20:59:37.605322 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:59:37 crc kubenswrapper[4983]: E1125 20:59:37.607904 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 20:59:52 crc kubenswrapper[4983]: I1125 20:59:52.604720 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 20:59:52 crc kubenswrapper[4983]: E1125 
20:59:52.605761 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.177451 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq"] Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.181213 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.184166 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.185059 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.202302 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq"] Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.264897 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26441665-c12f-4d90-8d2b-bb965067a553-config-volume\") pod \"collect-profiles-29401740-7nmnq\" (UID: \"26441665-c12f-4d90-8d2b-bb965067a553\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.265008 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26441665-c12f-4d90-8d2b-bb965067a553-secret-volume\") pod \"collect-profiles-29401740-7nmnq\" (UID: \"26441665-c12f-4d90-8d2b-bb965067a553\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.265053 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcpw7\" (UniqueName: \"kubernetes.io/projected/26441665-c12f-4d90-8d2b-bb965067a553-kube-api-access-xcpw7\") pod \"collect-profiles-29401740-7nmnq\" (UID: \"26441665-c12f-4d90-8d2b-bb965067a553\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.367100 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26441665-c12f-4d90-8d2b-bb965067a553-secret-volume\") pod \"collect-profiles-29401740-7nmnq\" (UID: \"26441665-c12f-4d90-8d2b-bb965067a553\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.367177 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcpw7\" (UniqueName: \"kubernetes.io/projected/26441665-c12f-4d90-8d2b-bb965067a553-kube-api-access-xcpw7\") pod \"collect-profiles-29401740-7nmnq\" (UID: \"26441665-c12f-4d90-8d2b-bb965067a553\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.367506 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26441665-c12f-4d90-8d2b-bb965067a553-config-volume\") pod \"collect-profiles-29401740-7nmnq\" (UID: \"26441665-c12f-4d90-8d2b-bb965067a553\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.368861 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26441665-c12f-4d90-8d2b-bb965067a553-config-volume\") pod \"collect-profiles-29401740-7nmnq\" (UID: \"26441665-c12f-4d90-8d2b-bb965067a553\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.380349 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26441665-c12f-4d90-8d2b-bb965067a553-secret-volume\") pod \"collect-profiles-29401740-7nmnq\" (UID: \"26441665-c12f-4d90-8d2b-bb965067a553\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.390979 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcpw7\" (UniqueName: \"kubernetes.io/projected/26441665-c12f-4d90-8d2b-bb965067a553-kube-api-access-xcpw7\") pod \"collect-profiles-29401740-7nmnq\" (UID: \"26441665-c12f-4d90-8d2b-bb965067a553\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:00 crc kubenswrapper[4983]: I1125 21:00:00.505874 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:01 crc kubenswrapper[4983]: I1125 21:00:01.048300 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq"] Nov 25 21:00:01 crc kubenswrapper[4983]: W1125 21:00:01.059792 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26441665_c12f_4d90_8d2b_bb965067a553.slice/crio-d6498a7eb5bf9253ffbc71f505f803284fac72cabe54b8a5d460d86434fc6190 WatchSource:0}: Error finding container d6498a7eb5bf9253ffbc71f505f803284fac72cabe54b8a5d460d86434fc6190: Status 404 returned error can't find the container with id d6498a7eb5bf9253ffbc71f505f803284fac72cabe54b8a5d460d86434fc6190 Nov 25 21:00:01 crc kubenswrapper[4983]: I1125 21:00:01.606393 4983 generic.go:334] "Generic (PLEG): container finished" podID="26441665-c12f-4d90-8d2b-bb965067a553" containerID="0f1319be2281f97d18b515ea09f53f5fcf7c38707ec86c659f4d3432f3d6b24e" exitCode=0 Nov 25 21:00:01 crc kubenswrapper[4983]: I1125 21:00:01.617224 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" event={"ID":"26441665-c12f-4d90-8d2b-bb965067a553","Type":"ContainerDied","Data":"0f1319be2281f97d18b515ea09f53f5fcf7c38707ec86c659f4d3432f3d6b24e"} Nov 25 21:00:01 crc kubenswrapper[4983]: I1125 21:00:01.617271 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" event={"ID":"26441665-c12f-4d90-8d2b-bb965067a553","Type":"ContainerStarted","Data":"d6498a7eb5bf9253ffbc71f505f803284fac72cabe54b8a5d460d86434fc6190"} Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.009151 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.124789 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26441665-c12f-4d90-8d2b-bb965067a553-secret-volume\") pod \"26441665-c12f-4d90-8d2b-bb965067a553\" (UID: \"26441665-c12f-4d90-8d2b-bb965067a553\") " Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.124854 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcpw7\" (UniqueName: \"kubernetes.io/projected/26441665-c12f-4d90-8d2b-bb965067a553-kube-api-access-xcpw7\") pod \"26441665-c12f-4d90-8d2b-bb965067a553\" (UID: \"26441665-c12f-4d90-8d2b-bb965067a553\") " Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.125079 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26441665-c12f-4d90-8d2b-bb965067a553-config-volume\") pod \"26441665-c12f-4d90-8d2b-bb965067a553\" (UID: \"26441665-c12f-4d90-8d2b-bb965067a553\") " Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.125967 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26441665-c12f-4d90-8d2b-bb965067a553-config-volume" (OuterVolumeSpecName: "config-volume") pod "26441665-c12f-4d90-8d2b-bb965067a553" (UID: "26441665-c12f-4d90-8d2b-bb965067a553"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.131232 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26441665-c12f-4d90-8d2b-bb965067a553-kube-api-access-xcpw7" (OuterVolumeSpecName: "kube-api-access-xcpw7") pod "26441665-c12f-4d90-8d2b-bb965067a553" (UID: "26441665-c12f-4d90-8d2b-bb965067a553"). 
InnerVolumeSpecName "kube-api-access-xcpw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.131691 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26441665-c12f-4d90-8d2b-bb965067a553-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "26441665-c12f-4d90-8d2b-bb965067a553" (UID: "26441665-c12f-4d90-8d2b-bb965067a553"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.227922 4983 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26441665-c12f-4d90-8d2b-bb965067a553-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.227958 4983 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26441665-c12f-4d90-8d2b-bb965067a553-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.227968 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcpw7\" (UniqueName: \"kubernetes.io/projected/26441665-c12f-4d90-8d2b-bb965067a553-kube-api-access-xcpw7\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.633921 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" event={"ID":"26441665-c12f-4d90-8d2b-bb965067a553","Type":"ContainerDied","Data":"d6498a7eb5bf9253ffbc71f505f803284fac72cabe54b8a5d460d86434fc6190"} Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.633985 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401740-7nmnq" Nov 25 21:00:03 crc kubenswrapper[4983]: I1125 21:00:03.634623 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6498a7eb5bf9253ffbc71f505f803284fac72cabe54b8a5d460d86434fc6190" Nov 25 21:00:04 crc kubenswrapper[4983]: I1125 21:00:04.138981 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx"] Nov 25 21:00:04 crc kubenswrapper[4983]: I1125 21:00:04.154920 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401695-55fbx"] Nov 25 21:00:05 crc kubenswrapper[4983]: I1125 21:00:05.628549 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab247bf3-165b-4513-ad09-b33ce8fc15a8" path="/var/lib/kubelet/pods/ab247bf3-165b-4513-ad09-b33ce8fc15a8/volumes" Nov 25 21:00:07 crc kubenswrapper[4983]: I1125 21:00:07.606061 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 21:00:07 crc kubenswrapper[4983]: E1125 21:00:07.606894 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:00:20 crc kubenswrapper[4983]: I1125 21:00:20.905576 4983 generic.go:334] "Generic (PLEG): container finished" podID="71b3a358-6645-404b-8d14-cb6371e7fce4" containerID="a9c4a600a2483459530f961c0ef130932ed3a09edf4a527e55720cf960e226eb" exitCode=0 Nov 25 21:00:20 crc kubenswrapper[4983]: I1125 21:00:20.905744 4983 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" event={"ID":"71b3a358-6645-404b-8d14-cb6371e7fce4","Type":"ContainerDied","Data":"a9c4a600a2483459530f961c0ef130932ed3a09edf4a527e55720cf960e226eb"} Nov 25 21:00:21 crc kubenswrapper[4983]: I1125 21:00:21.605631 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.347704 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.501026 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-ssh-key\") pod \"71b3a358-6645-404b-8d14-cb6371e7fce4\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.501622 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/71b3a358-6645-404b-8d14-cb6371e7fce4-ovncontroller-config-0\") pod \"71b3a358-6645-404b-8d14-cb6371e7fce4\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.501989 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgwpg\" (UniqueName: \"kubernetes.io/projected/71b3a358-6645-404b-8d14-cb6371e7fce4-kube-api-access-xgwpg\") pod \"71b3a358-6645-404b-8d14-cb6371e7fce4\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.502313 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-inventory\") pod \"71b3a358-6645-404b-8d14-cb6371e7fce4\" (UID: 
\"71b3a358-6645-404b-8d14-cb6371e7fce4\") " Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.502775 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-ovn-combined-ca-bundle\") pod \"71b3a358-6645-404b-8d14-cb6371e7fce4\" (UID: \"71b3a358-6645-404b-8d14-cb6371e7fce4\") " Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.510759 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "71b3a358-6645-404b-8d14-cb6371e7fce4" (UID: "71b3a358-6645-404b-8d14-cb6371e7fce4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.511682 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b3a358-6645-404b-8d14-cb6371e7fce4-kube-api-access-xgwpg" (OuterVolumeSpecName: "kube-api-access-xgwpg") pod "71b3a358-6645-404b-8d14-cb6371e7fce4" (UID: "71b3a358-6645-404b-8d14-cb6371e7fce4"). InnerVolumeSpecName "kube-api-access-xgwpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.543782 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-inventory" (OuterVolumeSpecName: "inventory") pod "71b3a358-6645-404b-8d14-cb6371e7fce4" (UID: "71b3a358-6645-404b-8d14-cb6371e7fce4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.545337 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "71b3a358-6645-404b-8d14-cb6371e7fce4" (UID: "71b3a358-6645-404b-8d14-cb6371e7fce4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.562187 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71b3a358-6645-404b-8d14-cb6371e7fce4-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "71b3a358-6645-404b-8d14-cb6371e7fce4" (UID: "71b3a358-6645-404b-8d14-cb6371e7fce4"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.607038 4983 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.607100 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.607122 4983 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/71b3a358-6645-404b-8d14-cb6371e7fce4-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.607143 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgwpg\" (UniqueName: \"kubernetes.io/projected/71b3a358-6645-404b-8d14-cb6371e7fce4-kube-api-access-xgwpg\") 
on node \"crc\" DevicePath \"\"" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.607161 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71b3a358-6645-404b-8d14-cb6371e7fce4-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.931844 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.931838 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qkd9n" event={"ID":"71b3a358-6645-404b-8d14-cb6371e7fce4","Type":"ContainerDied","Data":"5c30e73a6b10567f5f2c7ceb786d52d270a9a4301b408f50621504058503414f"} Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.932269 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c30e73a6b10567f5f2c7ceb786d52d270a9a4301b408f50621504058503414f" Nov 25 21:00:22 crc kubenswrapper[4983]: I1125 21:00:22.934901 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"7cb277ade04156b812a21af907f0408b5b1f6b691577a49cb53b8b6cc26f407f"} Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.063611 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph"] Nov 25 21:00:23 crc kubenswrapper[4983]: E1125 21:00:23.064067 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b3a358-6645-404b-8d14-cb6371e7fce4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.064087 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b3a358-6645-404b-8d14-cb6371e7fce4" 
containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 21:00:23 crc kubenswrapper[4983]: E1125 21:00:23.064108 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26441665-c12f-4d90-8d2b-bb965067a553" containerName="collect-profiles" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.064114 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="26441665-c12f-4d90-8d2b-bb965067a553" containerName="collect-profiles" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.064337 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b3a358-6645-404b-8d14-cb6371e7fce4" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.064359 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="26441665-c12f-4d90-8d2b-bb965067a553" containerName="collect-profiles" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.065072 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.069467 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.069639 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.069873 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.070028 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.070297 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" 
Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.070540 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.085166 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph"] Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.222244 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.222325 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvcvv\" (UniqueName: \"kubernetes.io/projected/996735a0-8e3c-4c62-9403-3e02669b7c63-kube-api-access-mvcvv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.222526 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.223035 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.223341 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.223386 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.324989 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.325076 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.325186 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.325234 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvcvv\" (UniqueName: \"kubernetes.io/projected/996735a0-8e3c-4c62-9403-3e02669b7c63-kube-api-access-mvcvv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.325285 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.325400 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.334277 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.334837 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.336381 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.336986 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.342645 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.345204 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvcvv\" (UniqueName: \"kubernetes.io/projected/996735a0-8e3c-4c62-9403-3e02669b7c63-kube-api-access-mvcvv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:23 crc kubenswrapper[4983]: I1125 21:00:23.389410 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:00:24 crc kubenswrapper[4983]: W1125 21:00:24.020198 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod996735a0_8e3c_4c62_9403_3e02669b7c63.slice/crio-e5315eff32f392d782b014fbf20ffc2a14748f34941226756e8b0a1b0433d46e WatchSource:0}: Error finding container e5315eff32f392d782b014fbf20ffc2a14748f34941226756e8b0a1b0433d46e: Status 404 returned error can't find the container with id e5315eff32f392d782b014fbf20ffc2a14748f34941226756e8b0a1b0433d46e Nov 25 21:00:24 crc kubenswrapper[4983]: I1125 21:00:24.025348 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph"] Nov 25 21:00:24 crc kubenswrapper[4983]: I1125 21:00:24.794636 4983 scope.go:117] "RemoveContainer" containerID="874971330c4615c40b2c77b0a2d79f04760f84ddb70cdf95a968f40aed4dd84a" Nov 25 21:00:24 crc kubenswrapper[4983]: I1125 21:00:24.982745 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" event={"ID":"996735a0-8e3c-4c62-9403-3e02669b7c63","Type":"ContainerStarted","Data":"95cdf3179c83f27af374a117e9281c70528a5a9cb3285376a61f0be529259e56"} Nov 25 21:00:24 crc kubenswrapper[4983]: I1125 21:00:24.982818 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" event={"ID":"996735a0-8e3c-4c62-9403-3e02669b7c63","Type":"ContainerStarted","Data":"e5315eff32f392d782b014fbf20ffc2a14748f34941226756e8b0a1b0433d46e"} Nov 25 21:00:25 crc kubenswrapper[4983]: I1125 21:00:25.016069 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" podStartSLOduration=1.617287708 podStartE2EDuration="2.016040782s" 
podCreationTimestamp="2025-11-25 21:00:23 +0000 UTC" firstStartedPulling="2025-11-25 21:00:24.023677318 +0000 UTC m=+2005.136210710" lastFinishedPulling="2025-11-25 21:00:24.422430362 +0000 UTC m=+2005.534963784" observedRunningTime="2025-11-25 21:00:25.006664974 +0000 UTC m=+2006.119198376" watchObservedRunningTime="2025-11-25 21:00:25.016040782 +0000 UTC m=+2006.128574184" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.169112 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29401741-hmw9c"] Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.171407 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.199763 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401741-hmw9c"] Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.336716 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-fernet-keys\") pod \"keystone-cron-29401741-hmw9c\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.336777 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-config-data\") pod \"keystone-cron-29401741-hmw9c\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.337467 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-combined-ca-bundle\") pod 
\"keystone-cron-29401741-hmw9c\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.338312 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgtz4\" (UniqueName: \"kubernetes.io/projected/294b565c-28a4-45a2-a9af-8eefed6b82a4-kube-api-access-fgtz4\") pod \"keystone-cron-29401741-hmw9c\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.440510 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgtz4\" (UniqueName: \"kubernetes.io/projected/294b565c-28a4-45a2-a9af-8eefed6b82a4-kube-api-access-fgtz4\") pod \"keystone-cron-29401741-hmw9c\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.440715 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-fernet-keys\") pod \"keystone-cron-29401741-hmw9c\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.440762 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-config-data\") pod \"keystone-cron-29401741-hmw9c\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.440980 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-combined-ca-bundle\") pod 
\"keystone-cron-29401741-hmw9c\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.454057 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-combined-ca-bundle\") pod \"keystone-cron-29401741-hmw9c\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.454293 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-fernet-keys\") pod \"keystone-cron-29401741-hmw9c\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.455601 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-config-data\") pod \"keystone-cron-29401741-hmw9c\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.479919 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgtz4\" (UniqueName: \"kubernetes.io/projected/294b565c-28a4-45a2-a9af-8eefed6b82a4-kube-api-access-fgtz4\") pod \"keystone-cron-29401741-hmw9c\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:00 crc kubenswrapper[4983]: I1125 21:01:00.505679 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:01 crc kubenswrapper[4983]: I1125 21:01:01.077243 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401741-hmw9c"] Nov 25 21:01:01 crc kubenswrapper[4983]: W1125 21:01:01.086000 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod294b565c_28a4_45a2_a9af_8eefed6b82a4.slice/crio-9622698d3c3c95c4d0f43745b8212438dd120e1e3f4eeb2b244f465e79bc804c WatchSource:0}: Error finding container 9622698d3c3c95c4d0f43745b8212438dd120e1e3f4eeb2b244f465e79bc804c: Status 404 returned error can't find the container with id 9622698d3c3c95c4d0f43745b8212438dd120e1e3f4eeb2b244f465e79bc804c Nov 25 21:01:01 crc kubenswrapper[4983]: I1125 21:01:01.483694 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401741-hmw9c" event={"ID":"294b565c-28a4-45a2-a9af-8eefed6b82a4","Type":"ContainerStarted","Data":"f881a04e4cb7ed674b815b3455832c7a675b018f6da10113dcafa4d5aa4fc884"} Nov 25 21:01:01 crc kubenswrapper[4983]: I1125 21:01:01.485362 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401741-hmw9c" event={"ID":"294b565c-28a4-45a2-a9af-8eefed6b82a4","Type":"ContainerStarted","Data":"9622698d3c3c95c4d0f43745b8212438dd120e1e3f4eeb2b244f465e79bc804c"} Nov 25 21:01:01 crc kubenswrapper[4983]: I1125 21:01:01.521912 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29401741-hmw9c" podStartSLOduration=1.521884472 podStartE2EDuration="1.521884472s" podCreationTimestamp="2025-11-25 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 21:01:01.503104315 +0000 UTC m=+2042.615637707" watchObservedRunningTime="2025-11-25 21:01:01.521884472 +0000 UTC m=+2042.634417894" Nov 25 21:01:03 crc 
kubenswrapper[4983]: I1125 21:01:03.514661 4983 generic.go:334] "Generic (PLEG): container finished" podID="294b565c-28a4-45a2-a9af-8eefed6b82a4" containerID="f881a04e4cb7ed674b815b3455832c7a675b018f6da10113dcafa4d5aa4fc884" exitCode=0 Nov 25 21:01:03 crc kubenswrapper[4983]: I1125 21:01:03.514802 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401741-hmw9c" event={"ID":"294b565c-28a4-45a2-a9af-8eefed6b82a4","Type":"ContainerDied","Data":"f881a04e4cb7ed674b815b3455832c7a675b018f6da10113dcafa4d5aa4fc884"} Nov 25 21:01:04 crc kubenswrapper[4983]: I1125 21:01:04.892613 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:04 crc kubenswrapper[4983]: I1125 21:01:04.972864 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-combined-ca-bundle\") pod \"294b565c-28a4-45a2-a9af-8eefed6b82a4\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " Nov 25 21:01:04 crc kubenswrapper[4983]: I1125 21:01:04.973387 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-config-data\") pod \"294b565c-28a4-45a2-a9af-8eefed6b82a4\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.025654 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "294b565c-28a4-45a2-a9af-8eefed6b82a4" (UID: "294b565c-28a4-45a2-a9af-8eefed6b82a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.050578 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-config-data" (OuterVolumeSpecName: "config-data") pod "294b565c-28a4-45a2-a9af-8eefed6b82a4" (UID: "294b565c-28a4-45a2-a9af-8eefed6b82a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.074597 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-fernet-keys\") pod \"294b565c-28a4-45a2-a9af-8eefed6b82a4\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.074733 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgtz4\" (UniqueName: \"kubernetes.io/projected/294b565c-28a4-45a2-a9af-8eefed6b82a4-kube-api-access-fgtz4\") pod \"294b565c-28a4-45a2-a9af-8eefed6b82a4\" (UID: \"294b565c-28a4-45a2-a9af-8eefed6b82a4\") " Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.075107 4983 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.075133 4983 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.079085 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294b565c-28a4-45a2-a9af-8eefed6b82a4-kube-api-access-fgtz4" (OuterVolumeSpecName: "kube-api-access-fgtz4") 
pod "294b565c-28a4-45a2-a9af-8eefed6b82a4" (UID: "294b565c-28a4-45a2-a9af-8eefed6b82a4"). InnerVolumeSpecName "kube-api-access-fgtz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.080009 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "294b565c-28a4-45a2-a9af-8eefed6b82a4" (UID: "294b565c-28a4-45a2-a9af-8eefed6b82a4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.178170 4983 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/294b565c-28a4-45a2-a9af-8eefed6b82a4-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.178225 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgtz4\" (UniqueName: \"kubernetes.io/projected/294b565c-28a4-45a2-a9af-8eefed6b82a4-kube-api-access-fgtz4\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.543098 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401741-hmw9c" event={"ID":"294b565c-28a4-45a2-a9af-8eefed6b82a4","Type":"ContainerDied","Data":"9622698d3c3c95c4d0f43745b8212438dd120e1e3f4eeb2b244f465e79bc804c"} Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.543166 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9622698d3c3c95c4d0f43745b8212438dd120e1e3f4eeb2b244f465e79bc804c" Nov 25 21:01:05 crc kubenswrapper[4983]: I1125 21:01:05.543271 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401741-hmw9c" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.390991 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9lwjh"] Nov 25 21:01:09 crc kubenswrapper[4983]: E1125 21:01:09.393236 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="294b565c-28a4-45a2-a9af-8eefed6b82a4" containerName="keystone-cron" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.393267 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="294b565c-28a4-45a2-a9af-8eefed6b82a4" containerName="keystone-cron" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.393773 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="294b565c-28a4-45a2-a9af-8eefed6b82a4" containerName="keystone-cron" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.396281 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.402594 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9lwjh"] Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.488909 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-utilities\") pod \"community-operators-9lwjh\" (UID: \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\") " pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.489135 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmkwm\" (UniqueName: \"kubernetes.io/projected/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-kube-api-access-bmkwm\") pod \"community-operators-9lwjh\" (UID: \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\") " 
pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.489190 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-catalog-content\") pod \"community-operators-9lwjh\" (UID: \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\") " pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.590689 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmkwm\" (UniqueName: \"kubernetes.io/projected/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-kube-api-access-bmkwm\") pod \"community-operators-9lwjh\" (UID: \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\") " pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.590797 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-catalog-content\") pod \"community-operators-9lwjh\" (UID: \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\") " pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.590856 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-utilities\") pod \"community-operators-9lwjh\" (UID: \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\") " pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.591764 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-catalog-content\") pod \"community-operators-9lwjh\" (UID: \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\") " 
pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.591840 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-utilities\") pod \"community-operators-9lwjh\" (UID: \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\") " pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.616900 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmkwm\" (UniqueName: \"kubernetes.io/projected/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-kube-api-access-bmkwm\") pod \"community-operators-9lwjh\" (UID: \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\") " pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:09 crc kubenswrapper[4983]: I1125 21:01:09.723801 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:10 crc kubenswrapper[4983]: I1125 21:01:10.374104 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9lwjh"] Nov 25 21:01:10 crc kubenswrapper[4983]: I1125 21:01:10.610768 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lwjh" event={"ID":"34a3f087-c4ef-4047-b9f3-d8b7f5b44456","Type":"ContainerStarted","Data":"14eca94f1fb9893c1d0dfb8555888b50a8416f8405aee7feaf4682ac2f581791"} Nov 25 21:01:11 crc kubenswrapper[4983]: I1125 21:01:11.623702 4983 generic.go:334] "Generic (PLEG): container finished" podID="34a3f087-c4ef-4047-b9f3-d8b7f5b44456" containerID="57549843be299ffec533e4ed1257a8b9a19e1e59595efe548b618735692841d8" exitCode=0 Nov 25 21:01:11 crc kubenswrapper[4983]: I1125 21:01:11.624888 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lwjh" 
event={"ID":"34a3f087-c4ef-4047-b9f3-d8b7f5b44456","Type":"ContainerDied","Data":"57549843be299ffec533e4ed1257a8b9a19e1e59595efe548b618735692841d8"} Nov 25 21:01:12 crc kubenswrapper[4983]: I1125 21:01:12.637475 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lwjh" event={"ID":"34a3f087-c4ef-4047-b9f3-d8b7f5b44456","Type":"ContainerStarted","Data":"49703e1c471e4d18ddb1864f5890a6d631faa27d1f9697ca215a04818b365ea0"} Nov 25 21:01:13 crc kubenswrapper[4983]: I1125 21:01:13.655268 4983 generic.go:334] "Generic (PLEG): container finished" podID="34a3f087-c4ef-4047-b9f3-d8b7f5b44456" containerID="49703e1c471e4d18ddb1864f5890a6d631faa27d1f9697ca215a04818b365ea0" exitCode=0 Nov 25 21:01:13 crc kubenswrapper[4983]: I1125 21:01:13.655345 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lwjh" event={"ID":"34a3f087-c4ef-4047-b9f3-d8b7f5b44456","Type":"ContainerDied","Data":"49703e1c471e4d18ddb1864f5890a6d631faa27d1f9697ca215a04818b365ea0"} Nov 25 21:01:14 crc kubenswrapper[4983]: I1125 21:01:14.670662 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lwjh" event={"ID":"34a3f087-c4ef-4047-b9f3-d8b7f5b44456","Type":"ContainerStarted","Data":"349861ae95957106301b82da40bf16290b98ab436159cedfd2a6b3dea983118c"} Nov 25 21:01:14 crc kubenswrapper[4983]: I1125 21:01:14.705325 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9lwjh" podStartSLOduration=3.247220681 podStartE2EDuration="5.705295287s" podCreationTimestamp="2025-11-25 21:01:09 +0000 UTC" firstStartedPulling="2025-11-25 21:01:11.62791798 +0000 UTC m=+2052.740451402" lastFinishedPulling="2025-11-25 21:01:14.085992606 +0000 UTC m=+2055.198526008" observedRunningTime="2025-11-25 21:01:14.699854423 +0000 UTC m=+2055.812387855" watchObservedRunningTime="2025-11-25 21:01:14.705295287 +0000 UTC 
m=+2055.817828719" Nov 25 21:01:19 crc kubenswrapper[4983]: I1125 21:01:19.724501 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:19 crc kubenswrapper[4983]: I1125 21:01:19.726755 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:19 crc kubenswrapper[4983]: I1125 21:01:19.798822 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:20 crc kubenswrapper[4983]: I1125 21:01:20.896638 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:21 crc kubenswrapper[4983]: I1125 21:01:21.003784 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9lwjh"] Nov 25 21:01:22 crc kubenswrapper[4983]: I1125 21:01:22.796064 4983 generic.go:334] "Generic (PLEG): container finished" podID="996735a0-8e3c-4c62-9403-3e02669b7c63" containerID="95cdf3179c83f27af374a117e9281c70528a5a9cb3285376a61f0be529259e56" exitCode=0 Nov 25 21:01:22 crc kubenswrapper[4983]: I1125 21:01:22.796144 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" event={"ID":"996735a0-8e3c-4c62-9403-3e02669b7c63","Type":"ContainerDied","Data":"95cdf3179c83f27af374a117e9281c70528a5a9cb3285376a61f0be529259e56"} Nov 25 21:01:22 crc kubenswrapper[4983]: I1125 21:01:22.797381 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9lwjh" podUID="34a3f087-c4ef-4047-b9f3-d8b7f5b44456" containerName="registry-server" containerID="cri-o://349861ae95957106301b82da40bf16290b98ab436159cedfd2a6b3dea983118c" gracePeriod=2 Nov 25 21:01:23 crc kubenswrapper[4983]: I1125 21:01:23.819908 4983 
generic.go:334] "Generic (PLEG): container finished" podID="34a3f087-c4ef-4047-b9f3-d8b7f5b44456" containerID="349861ae95957106301b82da40bf16290b98ab436159cedfd2a6b3dea983118c" exitCode=0 Nov 25 21:01:23 crc kubenswrapper[4983]: I1125 21:01:23.820029 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lwjh" event={"ID":"34a3f087-c4ef-4047-b9f3-d8b7f5b44456","Type":"ContainerDied","Data":"349861ae95957106301b82da40bf16290b98ab436159cedfd2a6b3dea983118c"} Nov 25 21:01:23 crc kubenswrapper[4983]: I1125 21:01:23.821015 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lwjh" event={"ID":"34a3f087-c4ef-4047-b9f3-d8b7f5b44456","Type":"ContainerDied","Data":"14eca94f1fb9893c1d0dfb8555888b50a8416f8405aee7feaf4682ac2f581791"} Nov 25 21:01:23 crc kubenswrapper[4983]: I1125 21:01:23.821049 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14eca94f1fb9893c1d0dfb8555888b50a8416f8405aee7feaf4682ac2f581791" Nov 25 21:01:23 crc kubenswrapper[4983]: I1125 21:01:23.857296 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:23 crc kubenswrapper[4983]: I1125 21:01:23.964101 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-catalog-content\") pod \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\" (UID: \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\") " Nov 25 21:01:23 crc kubenswrapper[4983]: I1125 21:01:23.964154 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmkwm\" (UniqueName: \"kubernetes.io/projected/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-kube-api-access-bmkwm\") pod \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\" (UID: \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\") " Nov 25 21:01:23 crc kubenswrapper[4983]: I1125 21:01:23.964549 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-utilities\") pod \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\" (UID: \"34a3f087-c4ef-4047-b9f3-d8b7f5b44456\") " Nov 25 21:01:23 crc kubenswrapper[4983]: I1125 21:01:23.966623 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-utilities" (OuterVolumeSpecName: "utilities") pod "34a3f087-c4ef-4047-b9f3-d8b7f5b44456" (UID: "34a3f087-c4ef-4047-b9f3-d8b7f5b44456"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:01:23 crc kubenswrapper[4983]: I1125 21:01:23.973998 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-kube-api-access-bmkwm" (OuterVolumeSpecName: "kube-api-access-bmkwm") pod "34a3f087-c4ef-4047-b9f3-d8b7f5b44456" (UID: "34a3f087-c4ef-4047-b9f3-d8b7f5b44456"). InnerVolumeSpecName "kube-api-access-bmkwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.035000 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34a3f087-c4ef-4047-b9f3-d8b7f5b44456" (UID: "34a3f087-c4ef-4047-b9f3-d8b7f5b44456"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.067579 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.067633 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmkwm\" (UniqueName: \"kubernetes.io/projected/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-kube-api-access-bmkwm\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.067647 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34a3f087-c4ef-4047-b9f3-d8b7f5b44456-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.299386 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.373225 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-neutron-metadata-combined-ca-bundle\") pod \"996735a0-8e3c-4c62-9403-3e02669b7c63\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.373623 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-inventory\") pod \"996735a0-8e3c-4c62-9403-3e02669b7c63\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.373678 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-ssh-key\") pod \"996735a0-8e3c-4c62-9403-3e02669b7c63\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.373751 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvcvv\" (UniqueName: \"kubernetes.io/projected/996735a0-8e3c-4c62-9403-3e02669b7c63-kube-api-access-mvcvv\") pod \"996735a0-8e3c-4c62-9403-3e02669b7c63\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.373870 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-neutron-ovn-metadata-agent-neutron-config-0\") pod \"996735a0-8e3c-4c62-9403-3e02669b7c63\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 
21:01:24.373947 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-nova-metadata-neutron-config-0\") pod \"996735a0-8e3c-4c62-9403-3e02669b7c63\" (UID: \"996735a0-8e3c-4c62-9403-3e02669b7c63\") " Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.388975 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "996735a0-8e3c-4c62-9403-3e02669b7c63" (UID: "996735a0-8e3c-4c62-9403-3e02669b7c63"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.388999 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996735a0-8e3c-4c62-9403-3e02669b7c63-kube-api-access-mvcvv" (OuterVolumeSpecName: "kube-api-access-mvcvv") pod "996735a0-8e3c-4c62-9403-3e02669b7c63" (UID: "996735a0-8e3c-4c62-9403-3e02669b7c63"). InnerVolumeSpecName "kube-api-access-mvcvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.409345 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "996735a0-8e3c-4c62-9403-3e02669b7c63" (UID: "996735a0-8e3c-4c62-9403-3e02669b7c63"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.410985 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "996735a0-8e3c-4c62-9403-3e02669b7c63" (UID: "996735a0-8e3c-4c62-9403-3e02669b7c63"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.421253 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-inventory" (OuterVolumeSpecName: "inventory") pod "996735a0-8e3c-4c62-9403-3e02669b7c63" (UID: "996735a0-8e3c-4c62-9403-3e02669b7c63"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.425654 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "996735a0-8e3c-4c62-9403-3e02669b7c63" (UID: "996735a0-8e3c-4c62-9403-3e02669b7c63"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.476395 4983 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.476445 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.476456 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.476469 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvcvv\" (UniqueName: \"kubernetes.io/projected/996735a0-8e3c-4c62-9403-3e02669b7c63-kube-api-access-mvcvv\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.476482 4983 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.476492 4983 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/996735a0-8e3c-4c62-9403-3e02669b7c63-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.839025 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9lwjh" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.839037 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" event={"ID":"996735a0-8e3c-4c62-9403-3e02669b7c63","Type":"ContainerDied","Data":"e5315eff32f392d782b014fbf20ffc2a14748f34941226756e8b0a1b0433d46e"} Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.839642 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5315eff32f392d782b014fbf20ffc2a14748f34941226756e8b0a1b0433d46e" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.839085 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph" Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.933719 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9lwjh"] Nov 25 21:01:24 crc kubenswrapper[4983]: I1125 21:01:24.943917 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9lwjh"] Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.107184 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c"] Nov 25 21:01:25 crc kubenswrapper[4983]: E1125 21:01:25.107629 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a3f087-c4ef-4047-b9f3-d8b7f5b44456" containerName="extract-content" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.107643 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a3f087-c4ef-4047-b9f3-d8b7f5b44456" containerName="extract-content" Nov 25 21:01:25 crc kubenswrapper[4983]: E1125 21:01:25.107665 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996735a0-8e3c-4c62-9403-3e02669b7c63" 
containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.107672 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="996735a0-8e3c-4c62-9403-3e02669b7c63" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 21:01:25 crc kubenswrapper[4983]: E1125 21:01:25.107687 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a3f087-c4ef-4047-b9f3-d8b7f5b44456" containerName="extract-utilities" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.107695 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a3f087-c4ef-4047-b9f3-d8b7f5b44456" containerName="extract-utilities" Nov 25 21:01:25 crc kubenswrapper[4983]: E1125 21:01:25.107724 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a3f087-c4ef-4047-b9f3-d8b7f5b44456" containerName="registry-server" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.107731 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a3f087-c4ef-4047-b9f3-d8b7f5b44456" containerName="registry-server" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.107959 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a3f087-c4ef-4047-b9f3-d8b7f5b44456" containerName="registry-server" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.107992 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="996735a0-8e3c-4c62-9403-3e02669b7c63" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.108679 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.112654 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.112848 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.113107 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.113229 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.113720 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.128717 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c"] Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.203854 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.203914 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.203939 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.204066 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zv2s\" (UniqueName: \"kubernetes.io/projected/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-kube-api-access-5zv2s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.204107 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.306128 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.306221 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.306268 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.306388 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zv2s\" (UniqueName: \"kubernetes.io/projected/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-kube-api-access-5zv2s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.306459 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.313029 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.315439 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.317043 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.325582 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.327436 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zv2s\" (UniqueName: \"kubernetes.io/projected/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-kube-api-access-5zv2s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.450495 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" Nov 25 21:01:25 crc kubenswrapper[4983]: I1125 21:01:25.622894 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a3f087-c4ef-4047-b9f3-d8b7f5b44456" path="/var/lib/kubelet/pods/34a3f087-c4ef-4047-b9f3-d8b7f5b44456/volumes" Nov 25 21:01:26 crc kubenswrapper[4983]: W1125 21:01:26.067935 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4de4d7c6_ee24_4f8e_97c6_d15a5cd43e90.slice/crio-231a76aa5c493758fab0aba0ed90f25e8b2849eb26b1940bd07a6c0d75cd9f39 WatchSource:0}: Error finding container 231a76aa5c493758fab0aba0ed90f25e8b2849eb26b1940bd07a6c0d75cd9f39: Status 404 returned error can't find the container with id 231a76aa5c493758fab0aba0ed90f25e8b2849eb26b1940bd07a6c0d75cd9f39 Nov 25 21:01:26 crc kubenswrapper[4983]: I1125 21:01:26.070749 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c"] Nov 25 21:01:26 crc kubenswrapper[4983]: I1125 21:01:26.871273 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" event={"ID":"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90","Type":"ContainerStarted","Data":"231a76aa5c493758fab0aba0ed90f25e8b2849eb26b1940bd07a6c0d75cd9f39"} Nov 25 21:01:27 crc kubenswrapper[4983]: I1125 21:01:27.884695 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" event={"ID":"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90","Type":"ContainerStarted","Data":"f4f3802c179dc8f3cf19e9808db6fcdcf8b5dc7827c33d09a878821c40477ebd"} Nov 25 21:01:27 crc kubenswrapper[4983]: I1125 21:01:27.956055 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" podStartSLOduration=2.422693388 
podStartE2EDuration="2.956026383s" podCreationTimestamp="2025-11-25 21:01:25 +0000 UTC" firstStartedPulling="2025-11-25 21:01:26.071846426 +0000 UTC m=+2067.184379818" lastFinishedPulling="2025-11-25 21:01:26.605179421 +0000 UTC m=+2067.717712813" observedRunningTime="2025-11-25 21:01:27.9109289 +0000 UTC m=+2069.023462292" watchObservedRunningTime="2025-11-25 21:01:27.956026383 +0000 UTC m=+2069.068559775" Nov 25 21:02:39 crc kubenswrapper[4983]: I1125 21:02:39.928111 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:02:39 crc kubenswrapper[4983]: I1125 21:02:39.928996 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:02:55 crc kubenswrapper[4983]: I1125 21:02:55.980986 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6blwf"] Nov 25 21:02:55 crc kubenswrapper[4983]: I1125 21:02:55.984300 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:02:56 crc kubenswrapper[4983]: I1125 21:02:56.017855 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6blwf"] Nov 25 21:02:56 crc kubenswrapper[4983]: I1125 21:02:56.118136 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79c5a817-31c5-4d70-8a68-fb2c081ce7df-utilities\") pod \"redhat-marketplace-6blwf\" (UID: \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\") " pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:02:56 crc kubenswrapper[4983]: I1125 21:02:56.118232 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8gps\" (UniqueName: \"kubernetes.io/projected/79c5a817-31c5-4d70-8a68-fb2c081ce7df-kube-api-access-v8gps\") pod \"redhat-marketplace-6blwf\" (UID: \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\") " pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:02:56 crc kubenswrapper[4983]: I1125 21:02:56.119156 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79c5a817-31c5-4d70-8a68-fb2c081ce7df-catalog-content\") pod \"redhat-marketplace-6blwf\" (UID: \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\") " pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:02:56 crc kubenswrapper[4983]: I1125 21:02:56.220702 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79c5a817-31c5-4d70-8a68-fb2c081ce7df-catalog-content\") pod \"redhat-marketplace-6blwf\" (UID: \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\") " pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:02:56 crc kubenswrapper[4983]: I1125 21:02:56.220882 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79c5a817-31c5-4d70-8a68-fb2c081ce7df-utilities\") pod \"redhat-marketplace-6blwf\" (UID: \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\") " pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:02:56 crc kubenswrapper[4983]: I1125 21:02:56.220922 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8gps\" (UniqueName: \"kubernetes.io/projected/79c5a817-31c5-4d70-8a68-fb2c081ce7df-kube-api-access-v8gps\") pod \"redhat-marketplace-6blwf\" (UID: \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\") " pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:02:56 crc kubenswrapper[4983]: I1125 21:02:56.221362 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79c5a817-31c5-4d70-8a68-fb2c081ce7df-catalog-content\") pod \"redhat-marketplace-6blwf\" (UID: \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\") " pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:02:56 crc kubenswrapper[4983]: I1125 21:02:56.221533 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79c5a817-31c5-4d70-8a68-fb2c081ce7df-utilities\") pod \"redhat-marketplace-6blwf\" (UID: \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\") " pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:02:56 crc kubenswrapper[4983]: I1125 21:02:56.241701 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8gps\" (UniqueName: \"kubernetes.io/projected/79c5a817-31c5-4d70-8a68-fb2c081ce7df-kube-api-access-v8gps\") pod \"redhat-marketplace-6blwf\" (UID: \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\") " pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:02:56 crc kubenswrapper[4983]: I1125 21:02:56.305921 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:02:56 crc kubenswrapper[4983]: I1125 21:02:56.873380 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6blwf"] Nov 25 21:02:57 crc kubenswrapper[4983]: I1125 21:02:57.229642 4983 generic.go:334] "Generic (PLEG): container finished" podID="79c5a817-31c5-4d70-8a68-fb2c081ce7df" containerID="97b217937e1603d75c9b6fbd5cf2da3808e27012bbfc5630b67957d8df58dc3e" exitCode=0 Nov 25 21:02:57 crc kubenswrapper[4983]: I1125 21:02:57.229768 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6blwf" event={"ID":"79c5a817-31c5-4d70-8a68-fb2c081ce7df","Type":"ContainerDied","Data":"97b217937e1603d75c9b6fbd5cf2da3808e27012bbfc5630b67957d8df58dc3e"} Nov 25 21:02:57 crc kubenswrapper[4983]: I1125 21:02:57.230058 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6blwf" event={"ID":"79c5a817-31c5-4d70-8a68-fb2c081ce7df","Type":"ContainerStarted","Data":"09b6f4b814fae738b01e59c8228c055c066ce5c2a215bbc79927141f7e540085"} Nov 25 21:02:58 crc kubenswrapper[4983]: I1125 21:02:58.246272 4983 generic.go:334] "Generic (PLEG): container finished" podID="79c5a817-31c5-4d70-8a68-fb2c081ce7df" containerID="56925c8ad8fa607285444266b94f236f06a3daf4c285dc61227f71b8e6ed753c" exitCode=0 Nov 25 21:02:58 crc kubenswrapper[4983]: I1125 21:02:58.246370 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6blwf" event={"ID":"79c5a817-31c5-4d70-8a68-fb2c081ce7df","Type":"ContainerDied","Data":"56925c8ad8fa607285444266b94f236f06a3daf4c285dc61227f71b8e6ed753c"} Nov 25 21:02:59 crc kubenswrapper[4983]: I1125 21:02:59.262383 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6blwf" 
event={"ID":"79c5a817-31c5-4d70-8a68-fb2c081ce7df","Type":"ContainerStarted","Data":"3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c"} Nov 25 21:02:59 crc kubenswrapper[4983]: I1125 21:02:59.299139 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6blwf" podStartSLOduration=2.86954924 podStartE2EDuration="4.299099214s" podCreationTimestamp="2025-11-25 21:02:55 +0000 UTC" firstStartedPulling="2025-11-25 21:02:57.231986626 +0000 UTC m=+2158.344520048" lastFinishedPulling="2025-11-25 21:02:58.6615366 +0000 UTC m=+2159.774070022" observedRunningTime="2025-11-25 21:02:59.294285256 +0000 UTC m=+2160.406818648" watchObservedRunningTime="2025-11-25 21:02:59.299099214 +0000 UTC m=+2160.411632646" Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.157542 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zpmpn"] Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.160179 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.180468 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zpmpn"] Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.306068 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkhr6\" (UniqueName: \"kubernetes.io/projected/a1986c28-04d4-4505-ab6a-2a36975fb8f8-kube-api-access-dkhr6\") pod \"certified-operators-zpmpn\" (UID: \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\") " pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.306168 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1986c28-04d4-4505-ab6a-2a36975fb8f8-utilities\") pod \"certified-operators-zpmpn\" (UID: \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\") " pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.306203 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1986c28-04d4-4505-ab6a-2a36975fb8f8-catalog-content\") pod \"certified-operators-zpmpn\" (UID: \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\") " pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.408210 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkhr6\" (UniqueName: \"kubernetes.io/projected/a1986c28-04d4-4505-ab6a-2a36975fb8f8-kube-api-access-dkhr6\") pod \"certified-operators-zpmpn\" (UID: \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\") " pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.408698 4983 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1986c28-04d4-4505-ab6a-2a36975fb8f8-utilities\") pod \"certified-operators-zpmpn\" (UID: \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\") " pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.408798 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1986c28-04d4-4505-ab6a-2a36975fb8f8-catalog-content\") pod \"certified-operators-zpmpn\" (UID: \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\") " pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.409360 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1986c28-04d4-4505-ab6a-2a36975fb8f8-utilities\") pod \"certified-operators-zpmpn\" (UID: \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\") " pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.409394 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1986c28-04d4-4505-ab6a-2a36975fb8f8-catalog-content\") pod \"certified-operators-zpmpn\" (UID: \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\") " pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.427906 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkhr6\" (UniqueName: \"kubernetes.io/projected/a1986c28-04d4-4505-ab6a-2a36975fb8f8-kube-api-access-dkhr6\") pod \"certified-operators-zpmpn\" (UID: \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\") " pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:00 crc kubenswrapper[4983]: I1125 21:03:00.493066 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:01 crc kubenswrapper[4983]: I1125 21:03:01.020212 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zpmpn"] Nov 25 21:03:01 crc kubenswrapper[4983]: I1125 21:03:01.297486 4983 generic.go:334] "Generic (PLEG): container finished" podID="a1986c28-04d4-4505-ab6a-2a36975fb8f8" containerID="bcbd70db76288ae5be9bcfc2cb092145f2d22042739460bd525e20d93590135b" exitCode=0 Nov 25 21:03:01 crc kubenswrapper[4983]: I1125 21:03:01.297540 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpmpn" event={"ID":"a1986c28-04d4-4505-ab6a-2a36975fb8f8","Type":"ContainerDied","Data":"bcbd70db76288ae5be9bcfc2cb092145f2d22042739460bd525e20d93590135b"} Nov 25 21:03:01 crc kubenswrapper[4983]: I1125 21:03:01.297593 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpmpn" event={"ID":"a1986c28-04d4-4505-ab6a-2a36975fb8f8","Type":"ContainerStarted","Data":"7bb9ebe77369c4d5dee3e9d6424b8df42232bd766d807c49ab2296ad9a0f2a5a"} Nov 25 21:03:02 crc kubenswrapper[4983]: I1125 21:03:02.309290 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpmpn" event={"ID":"a1986c28-04d4-4505-ab6a-2a36975fb8f8","Type":"ContainerStarted","Data":"b8ab5157264842cc04ad9d10f3394cd83abc5a13c8f98b86dd36354daa237f74"} Nov 25 21:03:03 crc kubenswrapper[4983]: I1125 21:03:03.323729 4983 generic.go:334] "Generic (PLEG): container finished" podID="a1986c28-04d4-4505-ab6a-2a36975fb8f8" containerID="b8ab5157264842cc04ad9d10f3394cd83abc5a13c8f98b86dd36354daa237f74" exitCode=0 Nov 25 21:03:03 crc kubenswrapper[4983]: I1125 21:03:03.323828 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpmpn" 
event={"ID":"a1986c28-04d4-4505-ab6a-2a36975fb8f8","Type":"ContainerDied","Data":"b8ab5157264842cc04ad9d10f3394cd83abc5a13c8f98b86dd36354daa237f74"} Nov 25 21:03:04 crc kubenswrapper[4983]: I1125 21:03:04.345407 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpmpn" event={"ID":"a1986c28-04d4-4505-ab6a-2a36975fb8f8","Type":"ContainerStarted","Data":"b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88"} Nov 25 21:03:04 crc kubenswrapper[4983]: I1125 21:03:04.385715 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zpmpn" podStartSLOduration=1.765478149 podStartE2EDuration="4.385681087s" podCreationTimestamp="2025-11-25 21:03:00 +0000 UTC" firstStartedPulling="2025-11-25 21:03:01.300436691 +0000 UTC m=+2162.412970093" lastFinishedPulling="2025-11-25 21:03:03.920639599 +0000 UTC m=+2165.033173031" observedRunningTime="2025-11-25 21:03:04.374509701 +0000 UTC m=+2165.487043163" watchObservedRunningTime="2025-11-25 21:03:04.385681087 +0000 UTC m=+2165.498214519" Nov 25 21:03:06 crc kubenswrapper[4983]: I1125 21:03:06.306614 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:03:06 crc kubenswrapper[4983]: I1125 21:03:06.307418 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:03:06 crc kubenswrapper[4983]: I1125 21:03:06.401176 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:03:06 crc kubenswrapper[4983]: I1125 21:03:06.480097 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:03:07 crc kubenswrapper[4983]: I1125 21:03:07.558862 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-6blwf"] Nov 25 21:03:08 crc kubenswrapper[4983]: I1125 21:03:08.393720 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6blwf" podUID="79c5a817-31c5-4d70-8a68-fb2c081ce7df" containerName="registry-server" containerID="cri-o://3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c" gracePeriod=2 Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.002755 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.105419 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79c5a817-31c5-4d70-8a68-fb2c081ce7df-catalog-content\") pod \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\" (UID: \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\") " Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.105551 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79c5a817-31c5-4d70-8a68-fb2c081ce7df-utilities\") pod \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\" (UID: \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\") " Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.105642 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8gps\" (UniqueName: \"kubernetes.io/projected/79c5a817-31c5-4d70-8a68-fb2c081ce7df-kube-api-access-v8gps\") pod \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\" (UID: \"79c5a817-31c5-4d70-8a68-fb2c081ce7df\") " Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.108457 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79c5a817-31c5-4d70-8a68-fb2c081ce7df-utilities" (OuterVolumeSpecName: "utilities") pod "79c5a817-31c5-4d70-8a68-fb2c081ce7df" (UID: 
"79c5a817-31c5-4d70-8a68-fb2c081ce7df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.122673 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79c5a817-31c5-4d70-8a68-fb2c081ce7df-kube-api-access-v8gps" (OuterVolumeSpecName: "kube-api-access-v8gps") pod "79c5a817-31c5-4d70-8a68-fb2c081ce7df" (UID: "79c5a817-31c5-4d70-8a68-fb2c081ce7df"). InnerVolumeSpecName "kube-api-access-v8gps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.151255 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79c5a817-31c5-4d70-8a68-fb2c081ce7df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79c5a817-31c5-4d70-8a68-fb2c081ce7df" (UID: "79c5a817-31c5-4d70-8a68-fb2c081ce7df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.208762 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79c5a817-31c5-4d70-8a68-fb2c081ce7df-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.208817 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79c5a817-31c5-4d70-8a68-fb2c081ce7df-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.208838 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8gps\" (UniqueName: \"kubernetes.io/projected/79c5a817-31c5-4d70-8a68-fb2c081ce7df-kube-api-access-v8gps\") on node \"crc\" DevicePath \"\"" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.414079 4983 generic.go:334] "Generic (PLEG): container finished" 
podID="79c5a817-31c5-4d70-8a68-fb2c081ce7df" containerID="3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c" exitCode=0 Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.414231 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6blwf" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.414268 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6blwf" event={"ID":"79c5a817-31c5-4d70-8a68-fb2c081ce7df","Type":"ContainerDied","Data":"3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c"} Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.415522 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6blwf" event={"ID":"79c5a817-31c5-4d70-8a68-fb2c081ce7df","Type":"ContainerDied","Data":"09b6f4b814fae738b01e59c8228c055c066ce5c2a215bbc79927141f7e540085"} Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.415623 4983 scope.go:117] "RemoveContainer" containerID="3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.455597 4983 scope.go:117] "RemoveContainer" containerID="56925c8ad8fa607285444266b94f236f06a3daf4c285dc61227f71b8e6ed753c" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.492929 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6blwf"] Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.499853 4983 scope.go:117] "RemoveContainer" containerID="97b217937e1603d75c9b6fbd5cf2da3808e27012bbfc5630b67957d8df58dc3e" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.506964 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6blwf"] Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.542608 4983 scope.go:117] "RemoveContainer" 
containerID="3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c" Nov 25 21:03:09 crc kubenswrapper[4983]: E1125 21:03:09.543426 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c\": container with ID starting with 3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c not found: ID does not exist" containerID="3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.543460 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c"} err="failed to get container status \"3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c\": rpc error: code = NotFound desc = could not find container \"3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c\": container with ID starting with 3c5886fdc51ba2346bdc5ab49c5320586904959a96061d5ab8e2eb78b2bac28c not found: ID does not exist" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.543480 4983 scope.go:117] "RemoveContainer" containerID="56925c8ad8fa607285444266b94f236f06a3daf4c285dc61227f71b8e6ed753c" Nov 25 21:03:09 crc kubenswrapper[4983]: E1125 21:03:09.543975 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56925c8ad8fa607285444266b94f236f06a3daf4c285dc61227f71b8e6ed753c\": container with ID starting with 56925c8ad8fa607285444266b94f236f06a3daf4c285dc61227f71b8e6ed753c not found: ID does not exist" containerID="56925c8ad8fa607285444266b94f236f06a3daf4c285dc61227f71b8e6ed753c" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.543997 4983 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"56925c8ad8fa607285444266b94f236f06a3daf4c285dc61227f71b8e6ed753c"} err="failed to get container status \"56925c8ad8fa607285444266b94f236f06a3daf4c285dc61227f71b8e6ed753c\": rpc error: code = NotFound desc = could not find container \"56925c8ad8fa607285444266b94f236f06a3daf4c285dc61227f71b8e6ed753c\": container with ID starting with 56925c8ad8fa607285444266b94f236f06a3daf4c285dc61227f71b8e6ed753c not found: ID does not exist" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.544009 4983 scope.go:117] "RemoveContainer" containerID="97b217937e1603d75c9b6fbd5cf2da3808e27012bbfc5630b67957d8df58dc3e" Nov 25 21:03:09 crc kubenswrapper[4983]: E1125 21:03:09.544420 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b217937e1603d75c9b6fbd5cf2da3808e27012bbfc5630b67957d8df58dc3e\": container with ID starting with 97b217937e1603d75c9b6fbd5cf2da3808e27012bbfc5630b67957d8df58dc3e not found: ID does not exist" containerID="97b217937e1603d75c9b6fbd5cf2da3808e27012bbfc5630b67957d8df58dc3e" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.544440 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b217937e1603d75c9b6fbd5cf2da3808e27012bbfc5630b67957d8df58dc3e"} err="failed to get container status \"97b217937e1603d75c9b6fbd5cf2da3808e27012bbfc5630b67957d8df58dc3e\": rpc error: code = NotFound desc = could not find container \"97b217937e1603d75c9b6fbd5cf2da3808e27012bbfc5630b67957d8df58dc3e\": container with ID starting with 97b217937e1603d75c9b6fbd5cf2da3808e27012bbfc5630b67957d8df58dc3e not found: ID does not exist" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.622985 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79c5a817-31c5-4d70-8a68-fb2c081ce7df" path="/var/lib/kubelet/pods/79c5a817-31c5-4d70-8a68-fb2c081ce7df/volumes" Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 
21:03:09.928703 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:03:09 crc kubenswrapper[4983]: I1125 21:03:09.928815 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:03:10 crc kubenswrapper[4983]: I1125 21:03:10.493272 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:10 crc kubenswrapper[4983]: I1125 21:03:10.494173 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:10 crc kubenswrapper[4983]: I1125 21:03:10.582067 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:11 crc kubenswrapper[4983]: I1125 21:03:11.548521 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:11 crc kubenswrapper[4983]: I1125 21:03:11.949036 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zpmpn"] Nov 25 21:03:13 crc kubenswrapper[4983]: I1125 21:03:13.469914 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zpmpn" podUID="a1986c28-04d4-4505-ab6a-2a36975fb8f8" containerName="registry-server" containerID="cri-o://b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88" 
gracePeriod=2 Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.003503 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpmpn" Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.138216 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1986c28-04d4-4505-ab6a-2a36975fb8f8-catalog-content\") pod \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\" (UID: \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\") " Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.138479 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1986c28-04d4-4505-ab6a-2a36975fb8f8-utilities\") pod \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\" (UID: \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\") " Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.138584 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkhr6\" (UniqueName: \"kubernetes.io/projected/a1986c28-04d4-4505-ab6a-2a36975fb8f8-kube-api-access-dkhr6\") pod \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\" (UID: \"a1986c28-04d4-4505-ab6a-2a36975fb8f8\") " Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.139385 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1986c28-04d4-4505-ab6a-2a36975fb8f8-utilities" (OuterVolumeSpecName: "utilities") pod "a1986c28-04d4-4505-ab6a-2a36975fb8f8" (UID: "a1986c28-04d4-4505-ab6a-2a36975fb8f8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.146965 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1986c28-04d4-4505-ab6a-2a36975fb8f8-kube-api-access-dkhr6" (OuterVolumeSpecName: "kube-api-access-dkhr6") pod "a1986c28-04d4-4505-ab6a-2a36975fb8f8" (UID: "a1986c28-04d4-4505-ab6a-2a36975fb8f8"). InnerVolumeSpecName "kube-api-access-dkhr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.204962 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1986c28-04d4-4505-ab6a-2a36975fb8f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1986c28-04d4-4505-ab6a-2a36975fb8f8" (UID: "a1986c28-04d4-4505-ab6a-2a36975fb8f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.241182 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1986c28-04d4-4505-ab6a-2a36975fb8f8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.241234 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkhr6\" (UniqueName: \"kubernetes.io/projected/a1986c28-04d4-4505-ab6a-2a36975fb8f8-kube-api-access-dkhr6\") on node \"crc\" DevicePath \"\"" Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.241257 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1986c28-04d4-4505-ab6a-2a36975fb8f8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.484411 4983 generic.go:334] "Generic (PLEG): container finished" podID="a1986c28-04d4-4505-ab6a-2a36975fb8f8" 
containerID="b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88" exitCode=0
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.484478 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpmpn" event={"ID":"a1986c28-04d4-4505-ab6a-2a36975fb8f8","Type":"ContainerDied","Data":"b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88"}
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.484522 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpmpn" event={"ID":"a1986c28-04d4-4505-ab6a-2a36975fb8f8","Type":"ContainerDied","Data":"7bb9ebe77369c4d5dee3e9d6424b8df42232bd766d807c49ab2296ad9a0f2a5a"}
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.484527 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpmpn"
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.484584 4983 scope.go:117] "RemoveContainer" containerID="b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88"
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.527938 4983 scope.go:117] "RemoveContainer" containerID="b8ab5157264842cc04ad9d10f3394cd83abc5a13c8f98b86dd36354daa237f74"
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.538165 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zpmpn"]
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.546969 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zpmpn"]
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.582029 4983 scope.go:117] "RemoveContainer" containerID="bcbd70db76288ae5be9bcfc2cb092145f2d22042739460bd525e20d93590135b"
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.633034 4983 scope.go:117] "RemoveContainer" containerID="b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88"
Nov 25 21:03:14 crc kubenswrapper[4983]: E1125 21:03:14.633824 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88\": container with ID starting with b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88 not found: ID does not exist" containerID="b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88"
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.633898 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88"} err="failed to get container status \"b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88\": rpc error: code = NotFound desc = could not find container \"b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88\": container with ID starting with b7c3a3f4d04e435a45c965634ae3e038739310ecb982d9ac273c97cab63bfe88 not found: ID does not exist"
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.633941 4983 scope.go:117] "RemoveContainer" containerID="b8ab5157264842cc04ad9d10f3394cd83abc5a13c8f98b86dd36354daa237f74"
Nov 25 21:03:14 crc kubenswrapper[4983]: E1125 21:03:14.634718 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8ab5157264842cc04ad9d10f3394cd83abc5a13c8f98b86dd36354daa237f74\": container with ID starting with b8ab5157264842cc04ad9d10f3394cd83abc5a13c8f98b86dd36354daa237f74 not found: ID does not exist" containerID="b8ab5157264842cc04ad9d10f3394cd83abc5a13c8f98b86dd36354daa237f74"
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.634801 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8ab5157264842cc04ad9d10f3394cd83abc5a13c8f98b86dd36354daa237f74"} err="failed to get container status \"b8ab5157264842cc04ad9d10f3394cd83abc5a13c8f98b86dd36354daa237f74\": rpc error: code = NotFound desc = could not find container \"b8ab5157264842cc04ad9d10f3394cd83abc5a13c8f98b86dd36354daa237f74\": container with ID starting with b8ab5157264842cc04ad9d10f3394cd83abc5a13c8f98b86dd36354daa237f74 not found: ID does not exist"
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.634845 4983 scope.go:117] "RemoveContainer" containerID="bcbd70db76288ae5be9bcfc2cb092145f2d22042739460bd525e20d93590135b"
Nov 25 21:03:14 crc kubenswrapper[4983]: E1125 21:03:14.635354 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcbd70db76288ae5be9bcfc2cb092145f2d22042739460bd525e20d93590135b\": container with ID starting with bcbd70db76288ae5be9bcfc2cb092145f2d22042739460bd525e20d93590135b not found: ID does not exist" containerID="bcbd70db76288ae5be9bcfc2cb092145f2d22042739460bd525e20d93590135b"
Nov 25 21:03:14 crc kubenswrapper[4983]: I1125 21:03:14.635434 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcbd70db76288ae5be9bcfc2cb092145f2d22042739460bd525e20d93590135b"} err="failed to get container status \"bcbd70db76288ae5be9bcfc2cb092145f2d22042739460bd525e20d93590135b\": rpc error: code = NotFound desc = could not find container \"bcbd70db76288ae5be9bcfc2cb092145f2d22042739460bd525e20d93590135b\": container with ID starting with bcbd70db76288ae5be9bcfc2cb092145f2d22042739460bd525e20d93590135b not found: ID does not exist"
Nov 25 21:03:15 crc kubenswrapper[4983]: I1125 21:03:15.623478 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1986c28-04d4-4505-ab6a-2a36975fb8f8" path="/var/lib/kubelet/pods/a1986c28-04d4-4505-ab6a-2a36975fb8f8/volumes"
Nov 25 21:03:39 crc kubenswrapper[4983]: I1125 21:03:39.928377 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 21:03:39 crc kubenswrapper[4983]: I1125 21:03:39.929467 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 21:03:39 crc kubenswrapper[4983]: I1125 21:03:39.929697 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7"
Nov 25 21:03:39 crc kubenswrapper[4983]: I1125 21:03:39.932496 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7cb277ade04156b812a21af907f0408b5b1f6b691577a49cb53b8b6cc26f407f"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 21:03:39 crc kubenswrapper[4983]: I1125 21:03:39.933016 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://7cb277ade04156b812a21af907f0408b5b1f6b691577a49cb53b8b6cc26f407f" gracePeriod=600
Nov 25 21:03:40 crc kubenswrapper[4983]: I1125 21:03:40.810084 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="7cb277ade04156b812a21af907f0408b5b1f6b691577a49cb53b8b6cc26f407f" exitCode=0
Nov 25 21:03:40 crc kubenswrapper[4983]: I1125 21:03:40.810163 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"7cb277ade04156b812a21af907f0408b5b1f6b691577a49cb53b8b6cc26f407f"}
Nov 25 21:03:40 crc kubenswrapper[4983]: I1125 21:03:40.810879 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c"}
Nov 25 21:03:40 crc kubenswrapper[4983]: I1125 21:03:40.810908 4983 scope.go:117] "RemoveContainer" containerID="c3f0cca86823f9631ae6dac1981e438a31d2ebf92e7827fc40076d478cc32574"
Nov 25 21:06:08 crc kubenswrapper[4983]: I1125 21:06:08.766631 4983 generic.go:334] "Generic (PLEG): container finished" podID="4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90" containerID="f4f3802c179dc8f3cf19e9808db6fcdcf8b5dc7827c33d09a878821c40477ebd" exitCode=0
Nov 25 21:06:08 crc kubenswrapper[4983]: I1125 21:06:08.766775 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" event={"ID":"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90","Type":"ContainerDied","Data":"f4f3802c179dc8f3cf19e9808db6fcdcf8b5dc7827c33d09a878821c40477ebd"}
Nov 25 21:06:09 crc kubenswrapper[4983]: I1125 21:06:09.927658 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 21:06:09 crc kubenswrapper[4983]: I1125 21:06:09.928040 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.321042 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.420262 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zv2s\" (UniqueName: \"kubernetes.io/projected/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-kube-api-access-5zv2s\") pod \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") "
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.420339 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-libvirt-secret-0\") pod \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") "
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.420402 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-inventory\") pod \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") "
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.420498 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-ssh-key\") pod \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") "
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.420615 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-libvirt-combined-ca-bundle\") pod \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\" (UID: \"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90\") "
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.433712 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-kube-api-access-5zv2s" (OuterVolumeSpecName: "kube-api-access-5zv2s") pod "4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90" (UID: "4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90"). InnerVolumeSpecName "kube-api-access-5zv2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.434228 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90" (UID: "4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.457505 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-inventory" (OuterVolumeSpecName: "inventory") pod "4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90" (UID: "4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.475036 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90" (UID: "4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.476769 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90" (UID: "4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.523985 4983 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.524031 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.524043 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.524058 4983 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.524075 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zv2s\" (UniqueName: \"kubernetes.io/projected/4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90-kube-api-access-5zv2s\") on node \"crc\" DevicePath \"\""
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.802614 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c" event={"ID":"4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90","Type":"ContainerDied","Data":"231a76aa5c493758fab0aba0ed90f25e8b2849eb26b1940bd07a6c0d75cd9f39"}
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.803190 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="231a76aa5c493758fab0aba0ed90f25e8b2849eb26b1940bd07a6c0d75cd9f39"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.802880 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.946711 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"]
Nov 25 21:06:10 crc kubenswrapper[4983]: E1125 21:06:10.947212 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1986c28-04d4-4505-ab6a-2a36975fb8f8" containerName="extract-content"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.947226 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1986c28-04d4-4505-ab6a-2a36975fb8f8" containerName="extract-content"
Nov 25 21:06:10 crc kubenswrapper[4983]: E1125 21:06:10.947243 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c5a817-31c5-4d70-8a68-fb2c081ce7df" containerName="extract-content"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.947249 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c5a817-31c5-4d70-8a68-fb2c081ce7df" containerName="extract-content"
Nov 25 21:06:10 crc kubenswrapper[4983]: E1125 21:06:10.947260 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1986c28-04d4-4505-ab6a-2a36975fb8f8" containerName="extract-utilities"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.947268 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1986c28-04d4-4505-ab6a-2a36975fb8f8" containerName="extract-utilities"
Nov 25 21:06:10 crc kubenswrapper[4983]: E1125 21:06:10.947282 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.947289 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 25 21:06:10 crc kubenswrapper[4983]: E1125 21:06:10.947319 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1986c28-04d4-4505-ab6a-2a36975fb8f8" containerName="registry-server"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.947325 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1986c28-04d4-4505-ab6a-2a36975fb8f8" containerName="registry-server"
Nov 25 21:06:10 crc kubenswrapper[4983]: E1125 21:06:10.947337 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c5a817-31c5-4d70-8a68-fb2c081ce7df" containerName="extract-utilities"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.947343 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c5a817-31c5-4d70-8a68-fb2c081ce7df" containerName="extract-utilities"
Nov 25 21:06:10 crc kubenswrapper[4983]: E1125 21:06:10.947355 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c5a817-31c5-4d70-8a68-fb2c081ce7df" containerName="registry-server"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.947360 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c5a817-31c5-4d70-8a68-fb2c081ce7df" containerName="registry-server"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.947591 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1986c28-04d4-4505-ab6a-2a36975fb8f8" containerName="registry-server"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.947617 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.947626 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="79c5a817-31c5-4d70-8a68-fb2c081ce7df" containerName="registry-server"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.948496 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.956586 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.967986 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.968063 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.968260 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.968291 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.969223 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.969595 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6"
Nov 25 21:06:10 crc kubenswrapper[4983]: I1125 21:06:10.988370 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"]
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.044644 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.044697 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6ftv\" (UniqueName: \"kubernetes.io/projected/7ce9c984-8450-479b-aa5f-58f81943cf56-kube-api-access-j6ftv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.044741 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.044804 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.044852 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.044953 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.044976 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.045020 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.045040 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.147511 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.147579 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6ftv\" (UniqueName: \"kubernetes.io/projected/7ce9c984-8450-479b-aa5f-58f81943cf56-kube-api-access-j6ftv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.147632 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.147677 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.147725 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.147762 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.147791 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.148007 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.148704 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.148998 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.152746 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.153576 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.154019 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.155061 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.155674 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.156156 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.156534 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.173946 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6ftv\" (UniqueName: \"kubernetes.io/projected/7ce9c984-8450-479b-aa5f-58f81943cf56-kube-api-access-j6ftv\") pod \"nova-edpm-deployment-openstack-edpm-ipam-b7m44\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.282037 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.697968 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44"]
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.704834 4983 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 21:06:11 crc kubenswrapper[4983]: I1125 21:06:11.865603 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44" event={"ID":"7ce9c984-8450-479b-aa5f-58f81943cf56","Type":"ContainerStarted","Data":"9eda52ac1ebdb7f2002b2873023f313a2f753cc234f622f9b9721b117d7864c2"}
Nov 25 21:06:12 crc kubenswrapper[4983]: I1125 21:06:12.882734 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44" event={"ID":"7ce9c984-8450-479b-aa5f-58f81943cf56","Type":"ContainerStarted","Data":"5d349dd6229a16641e0e7024fd57f0c4d2c4af7503200587656e6baf403dc63d"}
Nov 25 21:06:12 crc kubenswrapper[4983]: I1125 21:06:12.924084 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44" podStartSLOduration=2.51259754 podStartE2EDuration="2.924055421s" podCreationTimestamp="2025-11-25 21:06:10 +0000 UTC" firstStartedPulling="2025-11-25 21:06:11.704516601 +0000 UTC m=+2352.817049993" lastFinishedPulling="2025-11-25 21:06:12.115974482 +0000 UTC m=+2353.228507874" observedRunningTime="2025-11-25 21:06:12.90858887 +0000 UTC m=+2354.021122302" watchObservedRunningTime="2025-11-25 21:06:12.924055421 +0000 UTC m=+2354.036588823"
Nov 25 21:06:39 crc kubenswrapper[4983]: I1125 21:06:39.928367 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 21:06:39 crc kubenswrapper[4983]: I1125 21:06:39.929586 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.505797 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kkmc9"]
Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.509462 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkmc9"
Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.536073 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kkmc9"]
Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.633939 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f90a0840-3335-4eb1-8989-635479c76c31-catalog-content\") pod \"redhat-operators-kkmc9\" (UID: \"f90a0840-3335-4eb1-8989-635479c76c31\") " pod="openshift-marketplace/redhat-operators-kkmc9"
Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.634613 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8mm4\" (UniqueName: \"kubernetes.io/projected/f90a0840-3335-4eb1-8989-635479c76c31-kube-api-access-b8mm4\") pod \"redhat-operators-kkmc9\" (UID: \"f90a0840-3335-4eb1-8989-635479c76c31\") " pod="openshift-marketplace/redhat-operators-kkmc9"
Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.634789 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f90a0840-3335-4eb1-8989-635479c76c31-utilities\") pod \"redhat-operators-kkmc9\" (UID: \"f90a0840-3335-4eb1-8989-635479c76c31\") " pod="openshift-marketplace/redhat-operators-kkmc9"
Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.736360 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f90a0840-3335-4eb1-8989-635479c76c31-utilities\") pod \"redhat-operators-kkmc9\" (UID: \"f90a0840-3335-4eb1-8989-635479c76c31\") " pod="openshift-marketplace/redhat-operators-kkmc9"
Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.736935 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f90a0840-3335-4eb1-8989-635479c76c31-utilities\") pod \"redhat-operators-kkmc9\" (UID: \"f90a0840-3335-4eb1-8989-635479c76c31\") " pod="openshift-marketplace/redhat-operators-kkmc9"
Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.737231 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f90a0840-3335-4eb1-8989-635479c76c31-catalog-content\") pod \"redhat-operators-kkmc9\" (UID: \"f90a0840-3335-4eb1-8989-635479c76c31\") " pod="openshift-marketplace/redhat-operators-kkmc9"
Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.736817 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f90a0840-3335-4eb1-8989-635479c76c31-catalog-content\") pod \"redhat-operators-kkmc9\" (UID: \"f90a0840-3335-4eb1-8989-635479c76c31\") " pod="openshift-marketplace/redhat-operators-kkmc9"
Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.738476 4983 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-b8mm4\" (UniqueName: \"kubernetes.io/projected/f90a0840-3335-4eb1-8989-635479c76c31-kube-api-access-b8mm4\") pod \"redhat-operators-kkmc9\" (UID: \"f90a0840-3335-4eb1-8989-635479c76c31\") " pod="openshift-marketplace/redhat-operators-kkmc9" Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.760495 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8mm4\" (UniqueName: \"kubernetes.io/projected/f90a0840-3335-4eb1-8989-635479c76c31-kube-api-access-b8mm4\") pod \"redhat-operators-kkmc9\" (UID: \"f90a0840-3335-4eb1-8989-635479c76c31\") " pod="openshift-marketplace/redhat-operators-kkmc9" Nov 25 21:06:52 crc kubenswrapper[4983]: I1125 21:06:52.840987 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkmc9" Nov 25 21:06:53 crc kubenswrapper[4983]: W1125 21:06:53.366448 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf90a0840_3335_4eb1_8989_635479c76c31.slice/crio-86b46b96ab0c0409683079b7faae1bb3e08d24b677f56f61acc1f2eedbc1e46f WatchSource:0}: Error finding container 86b46b96ab0c0409683079b7faae1bb3e08d24b677f56f61acc1f2eedbc1e46f: Status 404 returned error can't find the container with id 86b46b96ab0c0409683079b7faae1bb3e08d24b677f56f61acc1f2eedbc1e46f Nov 25 21:06:53 crc kubenswrapper[4983]: I1125 21:06:53.368665 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kkmc9"] Nov 25 21:06:53 crc kubenswrapper[4983]: I1125 21:06:53.413396 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkmc9" event={"ID":"f90a0840-3335-4eb1-8989-635479c76c31","Type":"ContainerStarted","Data":"86b46b96ab0c0409683079b7faae1bb3e08d24b677f56f61acc1f2eedbc1e46f"} Nov 25 21:06:54 crc kubenswrapper[4983]: I1125 21:06:54.429221 4983 
generic.go:334] "Generic (PLEG): container finished" podID="f90a0840-3335-4eb1-8989-635479c76c31" containerID="93898dd12516204ccc34a8e0231710ba477f7bd9bc8392bcb19af5f638b78c11" exitCode=0 Nov 25 21:06:54 crc kubenswrapper[4983]: I1125 21:06:54.429352 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkmc9" event={"ID":"f90a0840-3335-4eb1-8989-635479c76c31","Type":"ContainerDied","Data":"93898dd12516204ccc34a8e0231710ba477f7bd9bc8392bcb19af5f638b78c11"} Nov 25 21:06:55 crc kubenswrapper[4983]: I1125 21:06:55.441936 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkmc9" event={"ID":"f90a0840-3335-4eb1-8989-635479c76c31","Type":"ContainerStarted","Data":"076f5b178040ae3864b1331cc5e5d7cbe12081608e7f1e819ca058035de587a7"} Nov 25 21:06:56 crc kubenswrapper[4983]: I1125 21:06:56.478089 4983 generic.go:334] "Generic (PLEG): container finished" podID="f90a0840-3335-4eb1-8989-635479c76c31" containerID="076f5b178040ae3864b1331cc5e5d7cbe12081608e7f1e819ca058035de587a7" exitCode=0 Nov 25 21:06:56 crc kubenswrapper[4983]: I1125 21:06:56.478156 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkmc9" event={"ID":"f90a0840-3335-4eb1-8989-635479c76c31","Type":"ContainerDied","Data":"076f5b178040ae3864b1331cc5e5d7cbe12081608e7f1e819ca058035de587a7"} Nov 25 21:06:57 crc kubenswrapper[4983]: I1125 21:06:57.491572 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkmc9" event={"ID":"f90a0840-3335-4eb1-8989-635479c76c31","Type":"ContainerStarted","Data":"ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3"} Nov 25 21:06:57 crc kubenswrapper[4983]: I1125 21:06:57.521598 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kkmc9" podStartSLOduration=3.00737753 podStartE2EDuration="5.521580243s" 
podCreationTimestamp="2025-11-25 21:06:52 +0000 UTC" firstStartedPulling="2025-11-25 21:06:54.431317211 +0000 UTC m=+2395.543850613" lastFinishedPulling="2025-11-25 21:06:56.945519924 +0000 UTC m=+2398.058053326" observedRunningTime="2025-11-25 21:06:57.520275898 +0000 UTC m=+2398.632809290" watchObservedRunningTime="2025-11-25 21:06:57.521580243 +0000 UTC m=+2398.634113635" Nov 25 21:07:02 crc kubenswrapper[4983]: I1125 21:07:02.842074 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kkmc9" Nov 25 21:07:02 crc kubenswrapper[4983]: I1125 21:07:02.842759 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kkmc9" Nov 25 21:07:02 crc kubenswrapper[4983]: I1125 21:07:02.907172 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kkmc9" Nov 25 21:07:03 crc kubenswrapper[4983]: I1125 21:07:03.645331 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kkmc9" Nov 25 21:07:03 crc kubenswrapper[4983]: I1125 21:07:03.739264 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kkmc9"] Nov 25 21:07:05 crc kubenswrapper[4983]: I1125 21:07:05.583630 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kkmc9" podUID="f90a0840-3335-4eb1-8989-635479c76c31" containerName="registry-server" containerID="cri-o://ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3" gracePeriod=2 Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.072101 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kkmc9" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.156413 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f90a0840-3335-4eb1-8989-635479c76c31-utilities\") pod \"f90a0840-3335-4eb1-8989-635479c76c31\" (UID: \"f90a0840-3335-4eb1-8989-635479c76c31\") " Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.156502 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8mm4\" (UniqueName: \"kubernetes.io/projected/f90a0840-3335-4eb1-8989-635479c76c31-kube-api-access-b8mm4\") pod \"f90a0840-3335-4eb1-8989-635479c76c31\" (UID: \"f90a0840-3335-4eb1-8989-635479c76c31\") " Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.156667 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f90a0840-3335-4eb1-8989-635479c76c31-catalog-content\") pod \"f90a0840-3335-4eb1-8989-635479c76c31\" (UID: \"f90a0840-3335-4eb1-8989-635479c76c31\") " Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.171430 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f90a0840-3335-4eb1-8989-635479c76c31-utilities" (OuterVolumeSpecName: "utilities") pod "f90a0840-3335-4eb1-8989-635479c76c31" (UID: "f90a0840-3335-4eb1-8989-635479c76c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.185904 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f90a0840-3335-4eb1-8989-635479c76c31-kube-api-access-b8mm4" (OuterVolumeSpecName: "kube-api-access-b8mm4") pod "f90a0840-3335-4eb1-8989-635479c76c31" (UID: "f90a0840-3335-4eb1-8989-635479c76c31"). InnerVolumeSpecName "kube-api-access-b8mm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.259118 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f90a0840-3335-4eb1-8989-635479c76c31-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.259153 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8mm4\" (UniqueName: \"kubernetes.io/projected/f90a0840-3335-4eb1-8989-635479c76c31-kube-api-access-b8mm4\") on node \"crc\" DevicePath \"\"" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.624033 4983 generic.go:334] "Generic (PLEG): container finished" podID="f90a0840-3335-4eb1-8989-635479c76c31" containerID="ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3" exitCode=0 Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.624109 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kkmc9" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.624122 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkmc9" event={"ID":"f90a0840-3335-4eb1-8989-635479c76c31","Type":"ContainerDied","Data":"ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3"} Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.624609 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kkmc9" event={"ID":"f90a0840-3335-4eb1-8989-635479c76c31","Type":"ContainerDied","Data":"86b46b96ab0c0409683079b7faae1bb3e08d24b677f56f61acc1f2eedbc1e46f"} Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.624642 4983 scope.go:117] "RemoveContainer" containerID="ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.654079 4983 scope.go:117] "RemoveContainer" 
containerID="076f5b178040ae3864b1331cc5e5d7cbe12081608e7f1e819ca058035de587a7" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.682022 4983 scope.go:117] "RemoveContainer" containerID="93898dd12516204ccc34a8e0231710ba477f7bd9bc8392bcb19af5f638b78c11" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.763734 4983 scope.go:117] "RemoveContainer" containerID="ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3" Nov 25 21:07:06 crc kubenswrapper[4983]: E1125 21:07:06.764413 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3\": container with ID starting with ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3 not found: ID does not exist" containerID="ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.764487 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3"} err="failed to get container status \"ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3\": rpc error: code = NotFound desc = could not find container \"ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3\": container with ID starting with ee773880e263b5878038f1b405b6c9353005b362ce4ac81e0894fe798677d6a3 not found: ID does not exist" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.764538 4983 scope.go:117] "RemoveContainer" containerID="076f5b178040ae3864b1331cc5e5d7cbe12081608e7f1e819ca058035de587a7" Nov 25 21:07:06 crc kubenswrapper[4983]: E1125 21:07:06.765384 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"076f5b178040ae3864b1331cc5e5d7cbe12081608e7f1e819ca058035de587a7\": container with ID starting with 
076f5b178040ae3864b1331cc5e5d7cbe12081608e7f1e819ca058035de587a7 not found: ID does not exist" containerID="076f5b178040ae3864b1331cc5e5d7cbe12081608e7f1e819ca058035de587a7" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.765438 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"076f5b178040ae3864b1331cc5e5d7cbe12081608e7f1e819ca058035de587a7"} err="failed to get container status \"076f5b178040ae3864b1331cc5e5d7cbe12081608e7f1e819ca058035de587a7\": rpc error: code = NotFound desc = could not find container \"076f5b178040ae3864b1331cc5e5d7cbe12081608e7f1e819ca058035de587a7\": container with ID starting with 076f5b178040ae3864b1331cc5e5d7cbe12081608e7f1e819ca058035de587a7 not found: ID does not exist" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.765459 4983 scope.go:117] "RemoveContainer" containerID="93898dd12516204ccc34a8e0231710ba477f7bd9bc8392bcb19af5f638b78c11" Nov 25 21:07:06 crc kubenswrapper[4983]: E1125 21:07:06.766119 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93898dd12516204ccc34a8e0231710ba477f7bd9bc8392bcb19af5f638b78c11\": container with ID starting with 93898dd12516204ccc34a8e0231710ba477f7bd9bc8392bcb19af5f638b78c11 not found: ID does not exist" containerID="93898dd12516204ccc34a8e0231710ba477f7bd9bc8392bcb19af5f638b78c11" Nov 25 21:07:06 crc kubenswrapper[4983]: I1125 21:07:06.766178 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93898dd12516204ccc34a8e0231710ba477f7bd9bc8392bcb19af5f638b78c11"} err="failed to get container status \"93898dd12516204ccc34a8e0231710ba477f7bd9bc8392bcb19af5f638b78c11\": rpc error: code = NotFound desc = could not find container \"93898dd12516204ccc34a8e0231710ba477f7bd9bc8392bcb19af5f638b78c11\": container with ID starting with 93898dd12516204ccc34a8e0231710ba477f7bd9bc8392bcb19af5f638b78c11 not found: ID does not 
exist" Nov 25 21:07:07 crc kubenswrapper[4983]: I1125 21:07:07.438651 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f90a0840-3335-4eb1-8989-635479c76c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f90a0840-3335-4eb1-8989-635479c76c31" (UID: "f90a0840-3335-4eb1-8989-635479c76c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:07:07 crc kubenswrapper[4983]: I1125 21:07:07.485115 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f90a0840-3335-4eb1-8989-635479c76c31-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:07:07 crc kubenswrapper[4983]: I1125 21:07:07.581459 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kkmc9"] Nov 25 21:07:07 crc kubenswrapper[4983]: I1125 21:07:07.595154 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kkmc9"] Nov 25 21:07:07 crc kubenswrapper[4983]: I1125 21:07:07.623413 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f90a0840-3335-4eb1-8989-635479c76c31" path="/var/lib/kubelet/pods/f90a0840-3335-4eb1-8989-635479c76c31/volumes" Nov 25 21:07:09 crc kubenswrapper[4983]: I1125 21:07:09.928623 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:07:09 crc kubenswrapper[4983]: I1125 21:07:09.929058 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:07:09 crc kubenswrapper[4983]: I1125 21:07:09.929126 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 21:07:09 crc kubenswrapper[4983]: I1125 21:07:09.930146 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 21:07:09 crc kubenswrapper[4983]: I1125 21:07:09.930240 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" gracePeriod=600 Nov 25 21:07:10 crc kubenswrapper[4983]: E1125 21:07:10.064025 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:07:10 crc kubenswrapper[4983]: I1125 21:07:10.680335 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" exitCode=0 Nov 25 21:07:10 crc kubenswrapper[4983]: I1125 21:07:10.680435 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c"} Nov 25 21:07:10 crc kubenswrapper[4983]: I1125 21:07:10.680526 4983 scope.go:117] "RemoveContainer" containerID="7cb277ade04156b812a21af907f0408b5b1f6b691577a49cb53b8b6cc26f407f" Nov 25 21:07:10 crc kubenswrapper[4983]: I1125 21:07:10.682069 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:07:10 crc kubenswrapper[4983]: E1125 21:07:10.683691 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:07:22 crc kubenswrapper[4983]: I1125 21:07:22.605332 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:07:22 crc kubenswrapper[4983]: E1125 21:07:22.606168 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:07:25 crc kubenswrapper[4983]: I1125 21:07:25.083971 4983 scope.go:117] "RemoveContainer" containerID="57549843be299ffec533e4ed1257a8b9a19e1e59595efe548b618735692841d8" Nov 25 21:07:25 crc kubenswrapper[4983]: I1125 21:07:25.123984 4983 
scope.go:117] "RemoveContainer" containerID="349861ae95957106301b82da40bf16290b98ab436159cedfd2a6b3dea983118c" Nov 25 21:07:25 crc kubenswrapper[4983]: I1125 21:07:25.183179 4983 scope.go:117] "RemoveContainer" containerID="49703e1c471e4d18ddb1864f5890a6d631faa27d1f9697ca215a04818b365ea0" Nov 25 21:07:37 crc kubenswrapper[4983]: I1125 21:07:37.606059 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:07:37 crc kubenswrapper[4983]: E1125 21:07:37.607197 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:07:50 crc kubenswrapper[4983]: I1125 21:07:50.604871 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:07:50 crc kubenswrapper[4983]: E1125 21:07:50.605942 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:08:04 crc kubenswrapper[4983]: I1125 21:08:04.605773 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:08:04 crc kubenswrapper[4983]: E1125 21:08:04.608139 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:08:17 crc kubenswrapper[4983]: I1125 21:08:17.605874 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:08:17 crc kubenswrapper[4983]: E1125 21:08:17.609000 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:08:32 crc kubenswrapper[4983]: I1125 21:08:32.604665 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:08:32 crc kubenswrapper[4983]: E1125 21:08:32.605342 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:08:46 crc kubenswrapper[4983]: I1125 21:08:46.606011 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:08:46 crc kubenswrapper[4983]: E1125 21:08:46.607351 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:08:58 crc kubenswrapper[4983]: I1125 21:08:58.606145 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:08:58 crc kubenswrapper[4983]: E1125 21:08:58.607759 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:09:12 crc kubenswrapper[4983]: I1125 21:09:12.606029 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:09:12 crc kubenswrapper[4983]: E1125 21:09:12.608971 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:09:23 crc kubenswrapper[4983]: I1125 21:09:23.605787 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:09:23 crc kubenswrapper[4983]: E1125 21:09:23.607175 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:09:25 crc kubenswrapper[4983]: I1125 21:09:25.340619 4983 generic.go:334] "Generic (PLEG): container finished" podID="7ce9c984-8450-479b-aa5f-58f81943cf56" containerID="5d349dd6229a16641e0e7024fd57f0c4d2c4af7503200587656e6baf403dc63d" exitCode=0 Nov 25 21:09:25 crc kubenswrapper[4983]: I1125 21:09:25.340695 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44" event={"ID":"7ce9c984-8450-479b-aa5f-58f81943cf56","Type":"ContainerDied","Data":"5d349dd6229a16641e0e7024fd57f0c4d2c4af7503200587656e6baf403dc63d"} Nov 25 21:09:26 crc kubenswrapper[4983]: I1125 21:09:26.873457 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.060341 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-migration-ssh-key-0\") pod \"7ce9c984-8450-479b-aa5f-58f81943cf56\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.060385 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-inventory\") pod \"7ce9c984-8450-479b-aa5f-58f81943cf56\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.060403 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6ftv\" (UniqueName: \"kubernetes.io/projected/7ce9c984-8450-479b-aa5f-58f81943cf56-kube-api-access-j6ftv\") pod \"7ce9c984-8450-479b-aa5f-58f81943cf56\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.060520 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-ssh-key\") pod \"7ce9c984-8450-479b-aa5f-58f81943cf56\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.060548 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-cell1-compute-config-0\") pod \"7ce9c984-8450-479b-aa5f-58f81943cf56\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.060738 4983 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-combined-ca-bundle\") pod \"7ce9c984-8450-479b-aa5f-58f81943cf56\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.060795 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-extra-config-0\") pod \"7ce9c984-8450-479b-aa5f-58f81943cf56\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.060824 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-migration-ssh-key-1\") pod \"7ce9c984-8450-479b-aa5f-58f81943cf56\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.060872 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-cell1-compute-config-1\") pod \"7ce9c984-8450-479b-aa5f-58f81943cf56\" (UID: \"7ce9c984-8450-479b-aa5f-58f81943cf56\") " Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.069123 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce9c984-8450-479b-aa5f-58f81943cf56-kube-api-access-j6ftv" (OuterVolumeSpecName: "kube-api-access-j6ftv") pod "7ce9c984-8450-479b-aa5f-58f81943cf56" (UID: "7ce9c984-8450-479b-aa5f-58f81943cf56"). InnerVolumeSpecName "kube-api-access-j6ftv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.070416 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7ce9c984-8450-479b-aa5f-58f81943cf56" (UID: "7ce9c984-8450-479b-aa5f-58f81943cf56"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.097032 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "7ce9c984-8450-479b-aa5f-58f81943cf56" (UID: "7ce9c984-8450-479b-aa5f-58f81943cf56"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.097665 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7ce9c984-8450-479b-aa5f-58f81943cf56" (UID: "7ce9c984-8450-479b-aa5f-58f81943cf56"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.105788 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "7ce9c984-8450-479b-aa5f-58f81943cf56" (UID: "7ce9c984-8450-479b-aa5f-58f81943cf56"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.112932 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "7ce9c984-8450-479b-aa5f-58f81943cf56" (UID: "7ce9c984-8450-479b-aa5f-58f81943cf56"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.116900 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "7ce9c984-8450-479b-aa5f-58f81943cf56" (UID: "7ce9c984-8450-479b-aa5f-58f81943cf56"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.119240 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "7ce9c984-8450-479b-aa5f-58f81943cf56" (UID: "7ce9c984-8450-479b-aa5f-58f81943cf56"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.124453 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-inventory" (OuterVolumeSpecName: "inventory") pod "7ce9c984-8450-479b-aa5f-58f81943cf56" (UID: "7ce9c984-8450-479b-aa5f-58f81943cf56"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.163705 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.164207 4983 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.164230 4983 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.164244 4983 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.164256 4983 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.164268 4983 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.164281 4983 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-nova-migration-ssh-key-0\") on node \"crc\" 
DevicePath \"\"" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.164294 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ce9c984-8450-479b-aa5f-58f81943cf56-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.164306 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6ftv\" (UniqueName: \"kubernetes.io/projected/7ce9c984-8450-479b-aa5f-58f81943cf56-kube-api-access-j6ftv\") on node \"crc\" DevicePath \"\"" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.380176 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44" event={"ID":"7ce9c984-8450-479b-aa5f-58f81943cf56","Type":"ContainerDied","Data":"9eda52ac1ebdb7f2002b2873023f313a2f753cc234f622f9b9721b117d7864c2"} Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.380279 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-b7m44" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.380284 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eda52ac1ebdb7f2002b2873023f313a2f753cc234f622f9b9721b117d7864c2" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.500089 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm"] Nov 25 21:09:27 crc kubenswrapper[4983]: E1125 21:09:27.500482 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f90a0840-3335-4eb1-8989-635479c76c31" containerName="extract-utilities" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.500495 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f90a0840-3335-4eb1-8989-635479c76c31" containerName="extract-utilities" Nov 25 21:09:27 crc kubenswrapper[4983]: E1125 21:09:27.500511 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f90a0840-3335-4eb1-8989-635479c76c31" containerName="registry-server" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.500518 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f90a0840-3335-4eb1-8989-635479c76c31" containerName="registry-server" Nov 25 21:09:27 crc kubenswrapper[4983]: E1125 21:09:27.500531 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce9c984-8450-479b-aa5f-58f81943cf56" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.500538 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce9c984-8450-479b-aa5f-58f81943cf56" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 21:09:27 crc kubenswrapper[4983]: E1125 21:09:27.500685 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f90a0840-3335-4eb1-8989-635479c76c31" containerName="extract-content" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 
21:09:27.500695 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f90a0840-3335-4eb1-8989-635479c76c31" containerName="extract-content" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.500906 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f90a0840-3335-4eb1-8989-635479c76c31" containerName="registry-server" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.500929 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce9c984-8450-479b-aa5f-58f81943cf56" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.501537 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.505530 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.505992 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-z7jl6" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.506044 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.509270 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.511071 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.525776 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm"] Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.680920 4983 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.681408 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.681526 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8kzh\" (UniqueName: \"kubernetes.io/projected/34445193-9a8d-4ebd-ac42-d8348c11e375-kube-api-access-m8kzh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.681849 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.681876 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.681920 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.681964 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.783786 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.783867 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.783968 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.784024 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.784047 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8kzh\" (UniqueName: \"kubernetes.io/projected/34445193-9a8d-4ebd-ac42-d8348c11e375-kube-api-access-m8kzh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.784102 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.784125 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.790071 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.792353 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.792934 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.796481 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.803140 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.803712 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.810675 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8kzh\" (UniqueName: \"kubernetes.io/projected/34445193-9a8d-4ebd-ac42-d8348c11e375-kube-api-access-m8kzh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:27 crc kubenswrapper[4983]: I1125 21:09:27.822391 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:09:28 crc kubenswrapper[4983]: I1125 21:09:28.524477 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm"] Nov 25 21:09:29 crc kubenswrapper[4983]: I1125 21:09:29.403984 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" event={"ID":"34445193-9a8d-4ebd-ac42-d8348c11e375","Type":"ContainerStarted","Data":"9321a54f31c276128ca3cf55a9b1306a88ac21e814295c16365d9c9f469099c2"} Nov 25 21:09:29 crc kubenswrapper[4983]: I1125 21:09:29.404712 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" event={"ID":"34445193-9a8d-4ebd-ac42-d8348c11e375","Type":"ContainerStarted","Data":"4c27fd5e4b3d99d1d38a50b39d4ea29180b80d6c2cfe20f468ea87fd9152151d"} Nov 25 21:09:29 crc kubenswrapper[4983]: I1125 21:09:29.439414 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" podStartSLOduration=1.983828325 podStartE2EDuration="2.439394927s" podCreationTimestamp="2025-11-25 21:09:27 +0000 UTC" firstStartedPulling="2025-11-25 21:09:28.519784658 +0000 UTC m=+2549.632318060" lastFinishedPulling="2025-11-25 21:09:28.97535124 +0000 UTC m=+2550.087884662" observedRunningTime="2025-11-25 21:09:29.428045816 +0000 UTC m=+2550.540579238" watchObservedRunningTime="2025-11-25 21:09:29.439394927 +0000 UTC m=+2550.551928319" Nov 25 21:09:36 crc kubenswrapper[4983]: I1125 21:09:36.606151 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:09:36 crc kubenswrapper[4983]: E1125 21:09:36.608705 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:09:49 crc kubenswrapper[4983]: I1125 21:09:49.631545 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:09:49 crc kubenswrapper[4983]: E1125 21:09:49.633705 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:10:01 crc kubenswrapper[4983]: I1125 21:10:01.605124 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:10:01 crc kubenswrapper[4983]: E1125 21:10:01.606337 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:10:12 crc kubenswrapper[4983]: I1125 21:10:12.605297 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:10:12 crc kubenswrapper[4983]: E1125 21:10:12.606518 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:10:25 crc kubenswrapper[4983]: I1125 21:10:25.605718 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:10:25 crc kubenswrapper[4983]: E1125 21:10:25.606732 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:10:40 crc kubenswrapper[4983]: I1125 21:10:40.606614 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:10:40 crc kubenswrapper[4983]: E1125 21:10:40.607684 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:10:51 crc kubenswrapper[4983]: I1125 21:10:51.606526 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:10:51 crc kubenswrapper[4983]: E1125 21:10:51.608895 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:11:03 crc kubenswrapper[4983]: I1125 21:11:03.605607 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:11:03 crc kubenswrapper[4983]: E1125 21:11:03.606298 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:11:14 crc kubenswrapper[4983]: I1125 21:11:14.605947 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:11:14 crc kubenswrapper[4983]: E1125 21:11:14.606830 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:11:27 crc kubenswrapper[4983]: I1125 21:11:27.606241 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:11:27 crc kubenswrapper[4983]: E1125 21:11:27.607351 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:11:38 crc kubenswrapper[4983]: I1125 21:11:38.605854 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:11:38 crc kubenswrapper[4983]: E1125 21:11:38.607209 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:11:50 crc kubenswrapper[4983]: I1125 21:11:50.605370 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:11:50 crc kubenswrapper[4983]: E1125 21:11:50.606716 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:12:05 crc kubenswrapper[4983]: I1125 21:12:05.605229 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:12:05 crc kubenswrapper[4983]: E1125 21:12:05.606498 4983 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:12:14 crc kubenswrapper[4983]: I1125 21:12:14.514132 4983 generic.go:334] "Generic (PLEG): container finished" podID="34445193-9a8d-4ebd-ac42-d8348c11e375" containerID="9321a54f31c276128ca3cf55a9b1306a88ac21e814295c16365d9c9f469099c2" exitCode=0 Nov 25 21:12:14 crc kubenswrapper[4983]: I1125 21:12:14.514208 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" event={"ID":"34445193-9a8d-4ebd-ac42-d8348c11e375","Type":"ContainerDied","Data":"9321a54f31c276128ca3cf55a9b1306a88ac21e814295c16365d9c9f469099c2"} Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.019054 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.111745 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ssh-key\") pod \"34445193-9a8d-4ebd-ac42-d8348c11e375\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.111844 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-2\") pod \"34445193-9a8d-4ebd-ac42-d8348c11e375\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.111897 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-telemetry-combined-ca-bundle\") pod \"34445193-9a8d-4ebd-ac42-d8348c11e375\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.111954 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-inventory\") pod \"34445193-9a8d-4ebd-ac42-d8348c11e375\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.111975 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-1\") pod \"34445193-9a8d-4ebd-ac42-d8348c11e375\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.111995 4983 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8kzh\" (UniqueName: \"kubernetes.io/projected/34445193-9a8d-4ebd-ac42-d8348c11e375-kube-api-access-m8kzh\") pod \"34445193-9a8d-4ebd-ac42-d8348c11e375\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.112042 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-0\") pod \"34445193-9a8d-4ebd-ac42-d8348c11e375\" (UID: \"34445193-9a8d-4ebd-ac42-d8348c11e375\") " Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.119139 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34445193-9a8d-4ebd-ac42-d8348c11e375-kube-api-access-m8kzh" (OuterVolumeSpecName: "kube-api-access-m8kzh") pod "34445193-9a8d-4ebd-ac42-d8348c11e375" (UID: "34445193-9a8d-4ebd-ac42-d8348c11e375"). InnerVolumeSpecName "kube-api-access-m8kzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.120802 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "34445193-9a8d-4ebd-ac42-d8348c11e375" (UID: "34445193-9a8d-4ebd-ac42-d8348c11e375"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.145257 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "34445193-9a8d-4ebd-ac42-d8348c11e375" (UID: "34445193-9a8d-4ebd-ac42-d8348c11e375"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.149032 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "34445193-9a8d-4ebd-ac42-d8348c11e375" (UID: "34445193-9a8d-4ebd-ac42-d8348c11e375"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.151001 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "34445193-9a8d-4ebd-ac42-d8348c11e375" (UID: "34445193-9a8d-4ebd-ac42-d8348c11e375"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.164277 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-inventory" (OuterVolumeSpecName: "inventory") pod "34445193-9a8d-4ebd-ac42-d8348c11e375" (UID: "34445193-9a8d-4ebd-ac42-d8348c11e375"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.170422 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "34445193-9a8d-4ebd-ac42-d8348c11e375" (UID: "34445193-9a8d-4ebd-ac42-d8348c11e375"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.214673 4983 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.214724 4983 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.214748 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8kzh\" (UniqueName: \"kubernetes.io/projected/34445193-9a8d-4ebd-ac42-d8348c11e375-kube-api-access-m8kzh\") on node \"crc\" DevicePath \"\"" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.214770 4983 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.214789 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.214807 4983 reconciler_common.go:293] "Volume detached for volume 
\"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.214826 4983 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34445193-9a8d-4ebd-ac42-d8348c11e375-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.541103 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" event={"ID":"34445193-9a8d-4ebd-ac42-d8348c11e375","Type":"ContainerDied","Data":"4c27fd5e4b3d99d1d38a50b39d4ea29180b80d6c2cfe20f468ea87fd9152151d"} Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.541167 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c27fd5e4b3d99d1d38a50b39d4ea29180b80d6c2cfe20f468ea87fd9152151d" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.541179 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm" Nov 25 21:12:16 crc kubenswrapper[4983]: I1125 21:12:16.606339 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:12:17 crc kubenswrapper[4983]: I1125 21:12:17.561168 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"a777578c4b456f4a8673216592ae7372a1fbb2560f3d89ae83c102b06f0e54a2"} Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.626649 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gmxdg"] Nov 25 21:12:23 crc kubenswrapper[4983]: E1125 21:12:23.627841 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34445193-9a8d-4ebd-ac42-d8348c11e375" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.627860 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="34445193-9a8d-4ebd-ac42-d8348c11e375" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.628163 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="34445193-9a8d-4ebd-ac42-d8348c11e375" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.629838 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmxdg"] Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.629946 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.723420 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrp4g\" (UniqueName: \"kubernetes.io/projected/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-kube-api-access-mrp4g\") pod \"community-operators-gmxdg\" (UID: \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\") " pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.723509 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-catalog-content\") pod \"community-operators-gmxdg\" (UID: \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\") " pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.723630 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-utilities\") pod \"community-operators-gmxdg\" (UID: \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\") " pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.825641 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-utilities\") pod \"community-operators-gmxdg\" (UID: \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\") " pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.825948 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrp4g\" (UniqueName: \"kubernetes.io/projected/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-kube-api-access-mrp4g\") pod 
\"community-operators-gmxdg\" (UID: \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\") " pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.826088 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-catalog-content\") pod \"community-operators-gmxdg\" (UID: \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\") " pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.826189 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-utilities\") pod \"community-operators-gmxdg\" (UID: \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\") " pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.826624 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-catalog-content\") pod \"community-operators-gmxdg\" (UID: \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\") " pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.856366 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrp4g\" (UniqueName: \"kubernetes.io/projected/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-kube-api-access-mrp4g\") pod \"community-operators-gmxdg\" (UID: \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\") " pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:23 crc kubenswrapper[4983]: I1125 21:12:23.963979 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:24 crc kubenswrapper[4983]: I1125 21:12:24.464067 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmxdg"] Nov 25 21:12:24 crc kubenswrapper[4983]: I1125 21:12:24.641758 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxdg" event={"ID":"5f71ac6f-729c-48ff-b63c-6f869a40c6e9","Type":"ContainerStarted","Data":"59ee489b13022cfdeab6f3950a697393feb63970a51a9b24f8c50f41e178668b"} Nov 25 21:12:25 crc kubenswrapper[4983]: I1125 21:12:25.654887 4983 generic.go:334] "Generic (PLEG): container finished" podID="5f71ac6f-729c-48ff-b63c-6f869a40c6e9" containerID="df784b57f4de6f076d124710f96ef6df11284590f3837b5159003747cd5fad99" exitCode=0 Nov 25 21:12:25 crc kubenswrapper[4983]: I1125 21:12:25.654984 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxdg" event={"ID":"5f71ac6f-729c-48ff-b63c-6f869a40c6e9","Type":"ContainerDied","Data":"df784b57f4de6f076d124710f96ef6df11284590f3837b5159003747cd5fad99"} Nov 25 21:12:25 crc kubenswrapper[4983]: I1125 21:12:25.657800 4983 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 21:12:27 crc kubenswrapper[4983]: I1125 21:12:27.682601 4983 generic.go:334] "Generic (PLEG): container finished" podID="5f71ac6f-729c-48ff-b63c-6f869a40c6e9" containerID="f2721fb2a702feb2dbc22ecae61bdc0f955a820a4c182f75dff39be81f1f3bcf" exitCode=0 Nov 25 21:12:27 crc kubenswrapper[4983]: I1125 21:12:27.682723 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxdg" event={"ID":"5f71ac6f-729c-48ff-b63c-6f869a40c6e9","Type":"ContainerDied","Data":"f2721fb2a702feb2dbc22ecae61bdc0f955a820a4c182f75dff39be81f1f3bcf"} Nov 25 21:12:28 crc kubenswrapper[4983]: I1125 21:12:28.699162 4983 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-gmxdg" event={"ID":"5f71ac6f-729c-48ff-b63c-6f869a40c6e9","Type":"ContainerStarted","Data":"13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce"} Nov 25 21:12:28 crc kubenswrapper[4983]: I1125 21:12:28.718976 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gmxdg" podStartSLOduration=3.200647112 podStartE2EDuration="5.718959913s" podCreationTimestamp="2025-11-25 21:12:23 +0000 UTC" firstStartedPulling="2025-11-25 21:12:25.65729624 +0000 UTC m=+2726.769829672" lastFinishedPulling="2025-11-25 21:12:28.175609061 +0000 UTC m=+2729.288142473" observedRunningTime="2025-11-25 21:12:28.71772184 +0000 UTC m=+2729.830255252" watchObservedRunningTime="2025-11-25 21:12:28.718959913 +0000 UTC m=+2729.831493305" Nov 25 21:12:33 crc kubenswrapper[4983]: I1125 21:12:33.964639 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:33 crc kubenswrapper[4983]: I1125 21:12:33.965601 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:34 crc kubenswrapper[4983]: I1125 21:12:34.058400 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:34 crc kubenswrapper[4983]: I1125 21:12:34.885788 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:34 crc kubenswrapper[4983]: I1125 21:12:34.947444 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmxdg"] Nov 25 21:12:36 crc kubenswrapper[4983]: I1125 21:12:36.823258 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gmxdg" 
podUID="5f71ac6f-729c-48ff-b63c-6f869a40c6e9" containerName="registry-server" containerID="cri-o://13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce" gracePeriod=2 Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.436416 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.474628 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrp4g\" (UniqueName: \"kubernetes.io/projected/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-kube-api-access-mrp4g\") pod \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\" (UID: \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\") " Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.474751 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-catalog-content\") pod \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\" (UID: \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\") " Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.474857 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-utilities\") pod \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\" (UID: \"5f71ac6f-729c-48ff-b63c-6f869a40c6e9\") " Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.476858 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-utilities" (OuterVolumeSpecName: "utilities") pod "5f71ac6f-729c-48ff-b63c-6f869a40c6e9" (UID: "5f71ac6f-729c-48ff-b63c-6f869a40c6e9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.481343 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-kube-api-access-mrp4g" (OuterVolumeSpecName: "kube-api-access-mrp4g") pod "5f71ac6f-729c-48ff-b63c-6f869a40c6e9" (UID: "5f71ac6f-729c-48ff-b63c-6f869a40c6e9"). InnerVolumeSpecName "kube-api-access-mrp4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.543836 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f71ac6f-729c-48ff-b63c-6f869a40c6e9" (UID: "5f71ac6f-729c-48ff-b63c-6f869a40c6e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.577831 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrp4g\" (UniqueName: \"kubernetes.io/projected/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-kube-api-access-mrp4g\") on node \"crc\" DevicePath \"\"" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.577874 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.577889 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f71ac6f-729c-48ff-b63c-6f869a40c6e9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.838651 4983 generic.go:334] "Generic (PLEG): container finished" podID="5f71ac6f-729c-48ff-b63c-6f869a40c6e9" 
containerID="13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce" exitCode=0 Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.838706 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxdg" event={"ID":"5f71ac6f-729c-48ff-b63c-6f869a40c6e9","Type":"ContainerDied","Data":"13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce"} Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.838742 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmxdg" event={"ID":"5f71ac6f-729c-48ff-b63c-6f869a40c6e9","Type":"ContainerDied","Data":"59ee489b13022cfdeab6f3950a697393feb63970a51a9b24f8c50f41e178668b"} Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.838764 4983 scope.go:117] "RemoveContainer" containerID="13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.838761 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmxdg" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.866206 4983 scope.go:117] "RemoveContainer" containerID="f2721fb2a702feb2dbc22ecae61bdc0f955a820a4c182f75dff39be81f1f3bcf" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.868905 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmxdg"] Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.881904 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gmxdg"] Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.913533 4983 scope.go:117] "RemoveContainer" containerID="df784b57f4de6f076d124710f96ef6df11284590f3837b5159003747cd5fad99" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.949632 4983 scope.go:117] "RemoveContainer" containerID="13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce" Nov 25 21:12:37 crc kubenswrapper[4983]: E1125 21:12:37.950186 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce\": container with ID starting with 13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce not found: ID does not exist" containerID="13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.950221 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce"} err="failed to get container status \"13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce\": rpc error: code = NotFound desc = could not find container \"13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce\": container with ID starting with 13a71417c204b5e570b3dd43effb8a985fe5f87c22c292b103ffdaa7cc0ee0ce not 
found: ID does not exist" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.950246 4983 scope.go:117] "RemoveContainer" containerID="f2721fb2a702feb2dbc22ecae61bdc0f955a820a4c182f75dff39be81f1f3bcf" Nov 25 21:12:37 crc kubenswrapper[4983]: E1125 21:12:37.950627 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2721fb2a702feb2dbc22ecae61bdc0f955a820a4c182f75dff39be81f1f3bcf\": container with ID starting with f2721fb2a702feb2dbc22ecae61bdc0f955a820a4c182f75dff39be81f1f3bcf not found: ID does not exist" containerID="f2721fb2a702feb2dbc22ecae61bdc0f955a820a4c182f75dff39be81f1f3bcf" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.950656 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2721fb2a702feb2dbc22ecae61bdc0f955a820a4c182f75dff39be81f1f3bcf"} err="failed to get container status \"f2721fb2a702feb2dbc22ecae61bdc0f955a820a4c182f75dff39be81f1f3bcf\": rpc error: code = NotFound desc = could not find container \"f2721fb2a702feb2dbc22ecae61bdc0f955a820a4c182f75dff39be81f1f3bcf\": container with ID starting with f2721fb2a702feb2dbc22ecae61bdc0f955a820a4c182f75dff39be81f1f3bcf not found: ID does not exist" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.950674 4983 scope.go:117] "RemoveContainer" containerID="df784b57f4de6f076d124710f96ef6df11284590f3837b5159003747cd5fad99" Nov 25 21:12:37 crc kubenswrapper[4983]: E1125 21:12:37.951019 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df784b57f4de6f076d124710f96ef6df11284590f3837b5159003747cd5fad99\": container with ID starting with df784b57f4de6f076d124710f96ef6df11284590f3837b5159003747cd5fad99 not found: ID does not exist" containerID="df784b57f4de6f076d124710f96ef6df11284590f3837b5159003747cd5fad99" Nov 25 21:12:37 crc kubenswrapper[4983]: I1125 21:12:37.951045 4983 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df784b57f4de6f076d124710f96ef6df11284590f3837b5159003747cd5fad99"} err="failed to get container status \"df784b57f4de6f076d124710f96ef6df11284590f3837b5159003747cd5fad99\": rpc error: code = NotFound desc = could not find container \"df784b57f4de6f076d124710f96ef6df11284590f3837b5159003747cd5fad99\": container with ID starting with df784b57f4de6f076d124710f96ef6df11284590f3837b5159003747cd5fad99 not found: ID does not exist" Nov 25 21:12:39 crc kubenswrapper[4983]: I1125 21:12:39.633345 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f71ac6f-729c-48ff-b63c-6f869a40c6e9" path="/var/lib/kubelet/pods/5f71ac6f-729c-48ff-b63c-6f869a40c6e9/volumes" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.745200 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 25 21:13:03 crc kubenswrapper[4983]: E1125 21:13:03.746661 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f71ac6f-729c-48ff-b63c-6f869a40c6e9" containerName="registry-server" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.746686 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f71ac6f-729c-48ff-b63c-6f869a40c6e9" containerName="registry-server" Nov 25 21:13:03 crc kubenswrapper[4983]: E1125 21:13:03.746733 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f71ac6f-729c-48ff-b63c-6f869a40c6e9" containerName="extract-content" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.746746 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f71ac6f-729c-48ff-b63c-6f869a40c6e9" containerName="extract-content" Nov 25 21:13:03 crc kubenswrapper[4983]: E1125 21:13:03.746799 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f71ac6f-729c-48ff-b63c-6f869a40c6e9" containerName="extract-utilities" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.746813 4983 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5f71ac6f-729c-48ff-b63c-6f869a40c6e9" containerName="extract-utilities" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.747232 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f71ac6f-729c-48ff-b63c-6f869a40c6e9" containerName="registry-server" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.748278 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.751667 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.751964 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-dzhtx" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.752119 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.752590 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.757199 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.783513 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.783666 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5zk\" (UniqueName: 
\"kubernetes.io/projected/66868750-3f73-47fe-a353-f88441e69915-kube-api-access-wx5zk\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.783759 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/66868750-3f73-47fe-a353-f88441e69915-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.783800 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/66868750-3f73-47fe-a353-f88441e69915-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.783833 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.783953 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.784100 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/66868750-3f73-47fe-a353-f88441e69915-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.784286 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66868750-3f73-47fe-a353-f88441e69915-config-data\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.784391 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.886242 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66868750-3f73-47fe-a353-f88441e69915-config-data\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.886320 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.886362 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.886424 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5zk\" (UniqueName: \"kubernetes.io/projected/66868750-3f73-47fe-a353-f88441e69915-kube-api-access-wx5zk\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.886481 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/66868750-3f73-47fe-a353-f88441e69915-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.886507 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/66868750-3f73-47fe-a353-f88441e69915-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.886527 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.886583 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-openstack-config-secret\") pod 
\"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.886631 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/66868750-3f73-47fe-a353-f88441e69915-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.887457 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/66868750-3f73-47fe-a353-f88441e69915-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.887486 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.887767 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/66868750-3f73-47fe-a353-f88441e69915-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.888665 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66868750-3f73-47fe-a353-f88441e69915-config-data\") pod \"tempest-tests-tempest\" 
(UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.888740 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/66868750-3f73-47fe-a353-f88441e69915-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.896290 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.897410 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.898883 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.911470 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5zk\" (UniqueName: \"kubernetes.io/projected/66868750-3f73-47fe-a353-f88441e69915-kube-api-access-wx5zk\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:03 crc kubenswrapper[4983]: I1125 21:13:03.923146 4983 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " pod="openstack/tempest-tests-tempest" Nov 25 21:13:04 crc kubenswrapper[4983]: I1125 21:13:04.071881 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 21:13:04 crc kubenswrapper[4983]: I1125 21:13:04.556202 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 25 21:13:05 crc kubenswrapper[4983]: I1125 21:13:05.177696 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"66868750-3f73-47fe-a353-f88441e69915","Type":"ContainerStarted","Data":"bfb77d05266bef34788572078867ecd5fec1c371b52660399fd6bf36049835c7"} Nov 25 21:13:37 crc kubenswrapper[4983]: E1125 21:13:37.994138 4983 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 25 21:13:37 crc kubenswrapper[4983]: E1125 21:13:37.995015 4983 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx5zk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(66868750-3f73-47fe-a353-f88441e69915): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 21:13:37 crc kubenswrapper[4983]: E1125 21:13:37.996284 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="66868750-3f73-47fe-a353-f88441e69915" Nov 25 21:13:38 crc kubenswrapper[4983]: E1125 21:13:38.570091 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="66868750-3f73-47fe-a353-f88441e69915" Nov 25 21:13:53 crc 
kubenswrapper[4983]: I1125 21:13:53.733283 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"66868750-3f73-47fe-a353-f88441e69915","Type":"ContainerStarted","Data":"aef8ffa0ea1f9ff5c55df9a9130e33fd26a9120f834136737f54dfea63627dcc"} Nov 25 21:13:53 crc kubenswrapper[4983]: I1125 21:13:53.759665 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.19826064 podStartE2EDuration="51.75964608s" podCreationTimestamp="2025-11-25 21:13:02 +0000 UTC" firstStartedPulling="2025-11-25 21:13:04.567824846 +0000 UTC m=+2765.680358238" lastFinishedPulling="2025-11-25 21:13:52.129210246 +0000 UTC m=+2813.241743678" observedRunningTime="2025-11-25 21:13:53.754778961 +0000 UTC m=+2814.867312363" watchObservedRunningTime="2025-11-25 21:13:53.75964608 +0000 UTC m=+2814.872179482" Nov 25 21:14:39 crc kubenswrapper[4983]: I1125 21:14:39.927863 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:14:39 crc kubenswrapper[4983]: I1125 21:14:39.928523 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.186719 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g"] Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.189376 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.191612 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/176fef29-5595-40fb-90dd-c58f2af9c5a0-secret-volume\") pod \"collect-profiles-29401755-lrb2g\" (UID: \"176fef29-5595-40fb-90dd-c58f2af9c5a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.191780 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2bjm\" (UniqueName: \"kubernetes.io/projected/176fef29-5595-40fb-90dd-c58f2af9c5a0-kube-api-access-z2bjm\") pod \"collect-profiles-29401755-lrb2g\" (UID: \"176fef29-5595-40fb-90dd-c58f2af9c5a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.191860 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/176fef29-5595-40fb-90dd-c58f2af9c5a0-config-volume\") pod \"collect-profiles-29401755-lrb2g\" (UID: \"176fef29-5595-40fb-90dd-c58f2af9c5a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.201458 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g"] Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.220107 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.220112 4983 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.293906 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/176fef29-5595-40fb-90dd-c58f2af9c5a0-config-volume\") pod \"collect-profiles-29401755-lrb2g\" (UID: \"176fef29-5595-40fb-90dd-c58f2af9c5a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.294324 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/176fef29-5595-40fb-90dd-c58f2af9c5a0-secret-volume\") pod \"collect-profiles-29401755-lrb2g\" (UID: \"176fef29-5595-40fb-90dd-c58f2af9c5a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.294490 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2bjm\" (UniqueName: \"kubernetes.io/projected/176fef29-5595-40fb-90dd-c58f2af9c5a0-kube-api-access-z2bjm\") pod \"collect-profiles-29401755-lrb2g\" (UID: \"176fef29-5595-40fb-90dd-c58f2af9c5a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.296210 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/176fef29-5595-40fb-90dd-c58f2af9c5a0-config-volume\") pod \"collect-profiles-29401755-lrb2g\" (UID: \"176fef29-5595-40fb-90dd-c58f2af9c5a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.304875 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/176fef29-5595-40fb-90dd-c58f2af9c5a0-secret-volume\") pod \"collect-profiles-29401755-lrb2g\" (UID: \"176fef29-5595-40fb-90dd-c58f2af9c5a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.315443 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2bjm\" (UniqueName: \"kubernetes.io/projected/176fef29-5595-40fb-90dd-c58f2af9c5a0-kube-api-access-z2bjm\") pod \"collect-profiles-29401755-lrb2g\" (UID: \"176fef29-5595-40fb-90dd-c58f2af9c5a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:00 crc kubenswrapper[4983]: I1125 21:15:00.544716 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:01 crc kubenswrapper[4983]: I1125 21:15:01.055343 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g"] Nov 25 21:15:01 crc kubenswrapper[4983]: I1125 21:15:01.572683 4983 generic.go:334] "Generic (PLEG): container finished" podID="176fef29-5595-40fb-90dd-c58f2af9c5a0" containerID="841acef8d47dc5d4908e24eb66132f562320b848f4bd4e5baefea54d65a74828" exitCode=0 Nov 25 21:15:01 crc kubenswrapper[4983]: I1125 21:15:01.572998 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" event={"ID":"176fef29-5595-40fb-90dd-c58f2af9c5a0","Type":"ContainerDied","Data":"841acef8d47dc5d4908e24eb66132f562320b848f4bd4e5baefea54d65a74828"} Nov 25 21:15:01 crc kubenswrapper[4983]: I1125 21:15:01.573031 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" 
event={"ID":"176fef29-5595-40fb-90dd-c58f2af9c5a0","Type":"ContainerStarted","Data":"95f9b2f9501ef34fad442cd8b133f3c7001242e1f90c83d9af02e8f79c78f2cb"} Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.076603 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.248152 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2bjm\" (UniqueName: \"kubernetes.io/projected/176fef29-5595-40fb-90dd-c58f2af9c5a0-kube-api-access-z2bjm\") pod \"176fef29-5595-40fb-90dd-c58f2af9c5a0\" (UID: \"176fef29-5595-40fb-90dd-c58f2af9c5a0\") " Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.248256 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/176fef29-5595-40fb-90dd-c58f2af9c5a0-secret-volume\") pod \"176fef29-5595-40fb-90dd-c58f2af9c5a0\" (UID: \"176fef29-5595-40fb-90dd-c58f2af9c5a0\") " Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.248299 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/176fef29-5595-40fb-90dd-c58f2af9c5a0-config-volume\") pod \"176fef29-5595-40fb-90dd-c58f2af9c5a0\" (UID: \"176fef29-5595-40fb-90dd-c58f2af9c5a0\") " Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.249439 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176fef29-5595-40fb-90dd-c58f2af9c5a0-config-volume" (OuterVolumeSpecName: "config-volume") pod "176fef29-5595-40fb-90dd-c58f2af9c5a0" (UID: "176fef29-5595-40fb-90dd-c58f2af9c5a0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.256029 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176fef29-5595-40fb-90dd-c58f2af9c5a0-kube-api-access-z2bjm" (OuterVolumeSpecName: "kube-api-access-z2bjm") pod "176fef29-5595-40fb-90dd-c58f2af9c5a0" (UID: "176fef29-5595-40fb-90dd-c58f2af9c5a0"). InnerVolumeSpecName "kube-api-access-z2bjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.256072 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/176fef29-5595-40fb-90dd-c58f2af9c5a0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "176fef29-5595-40fb-90dd-c58f2af9c5a0" (UID: "176fef29-5595-40fb-90dd-c58f2af9c5a0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.350936 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2bjm\" (UniqueName: \"kubernetes.io/projected/176fef29-5595-40fb-90dd-c58f2af9c5a0-kube-api-access-z2bjm\") on node \"crc\" DevicePath \"\"" Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.350972 4983 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/176fef29-5595-40fb-90dd-c58f2af9c5a0-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.350987 4983 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/176fef29-5595-40fb-90dd-c58f2af9c5a0-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.600367 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" 
event={"ID":"176fef29-5595-40fb-90dd-c58f2af9c5a0","Type":"ContainerDied","Data":"95f9b2f9501ef34fad442cd8b133f3c7001242e1f90c83d9af02e8f79c78f2cb"} Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.600412 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95f9b2f9501ef34fad442cd8b133f3c7001242e1f90c83d9af02e8f79c78f2cb" Nov 25 21:15:03 crc kubenswrapper[4983]: I1125 21:15:03.600489 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401755-lrb2g" Nov 25 21:15:04 crc kubenswrapper[4983]: I1125 21:15:04.168412 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8"] Nov 25 21:15:04 crc kubenswrapper[4983]: I1125 21:15:04.180109 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401710-jzmm8"] Nov 25 21:15:05 crc kubenswrapper[4983]: I1125 21:15:05.625174 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbad7ed5-4e2f-4c15-98f6-88b58a937f18" path="/var/lib/kubelet/pods/bbad7ed5-4e2f-4c15-98f6-88b58a937f18/volumes" Nov 25 21:15:09 crc kubenswrapper[4983]: I1125 21:15:09.927888 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:15:09 crc kubenswrapper[4983]: I1125 21:15:09.928810 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:15:37 crc 
kubenswrapper[4983]: I1125 21:15:37.946814 4983 scope.go:117] "RemoveContainer" containerID="f4bd79ebe0944ee7635d29d620a237c7163baf5fada0dfaab2b7af636c0a80cc" Nov 25 21:15:39 crc kubenswrapper[4983]: I1125 21:15:39.927652 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:15:39 crc kubenswrapper[4983]: I1125 21:15:39.927967 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:15:39 crc kubenswrapper[4983]: I1125 21:15:39.928012 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 21:15:39 crc kubenswrapper[4983]: I1125 21:15:39.928755 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a777578c4b456f4a8673216592ae7372a1fbb2560f3d89ae83c102b06f0e54a2"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 21:15:39 crc kubenswrapper[4983]: I1125 21:15:39.928812 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://a777578c4b456f4a8673216592ae7372a1fbb2560f3d89ae83c102b06f0e54a2" gracePeriod=600 Nov 25 21:15:40 crc kubenswrapper[4983]: E1125 
21:15:40.159733 4983 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod373cf631_46b3_49f3_af97_be8271ce5150.slice/crio-conmon-a777578c4b456f4a8673216592ae7372a1fbb2560f3d89ae83c102b06f0e54a2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod373cf631_46b3_49f3_af97_be8271ce5150.slice/crio-a777578c4b456f4a8673216592ae7372a1fbb2560f3d89ae83c102b06f0e54a2.scope\": RecentStats: unable to find data in memory cache]" Nov 25 21:15:41 crc kubenswrapper[4983]: I1125 21:15:41.069073 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="a777578c4b456f4a8673216592ae7372a1fbb2560f3d89ae83c102b06f0e54a2" exitCode=0 Nov 25 21:15:41 crc kubenswrapper[4983]: I1125 21:15:41.069149 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"a777578c4b456f4a8673216592ae7372a1fbb2560f3d89ae83c102b06f0e54a2"} Nov 25 21:15:41 crc kubenswrapper[4983]: I1125 21:15:41.069751 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504"} Nov 25 21:15:41 crc kubenswrapper[4983]: I1125 21:15:41.069781 4983 scope.go:117] "RemoveContainer" containerID="c7bc25b71120c00ba3359ec4bbe86e48b302491cd04670325befc2c08805137c" Nov 25 21:17:07 crc kubenswrapper[4983]: I1125 21:17:07.993692 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-988l7"] Nov 25 21:17:07 crc kubenswrapper[4983]: E1125 21:17:07.995464 4983 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="176fef29-5595-40fb-90dd-c58f2af9c5a0" containerName="collect-profiles" Nov 25 21:17:07 crc kubenswrapper[4983]: I1125 21:17:07.995489 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="176fef29-5595-40fb-90dd-c58f2af9c5a0" containerName="collect-profiles" Nov 25 21:17:07 crc kubenswrapper[4983]: I1125 21:17:07.995834 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="176fef29-5595-40fb-90dd-c58f2af9c5a0" containerName="collect-profiles" Nov 25 21:17:07 crc kubenswrapper[4983]: I1125 21:17:07.999161 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:08 crc kubenswrapper[4983]: I1125 21:17:08.024343 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-988l7"] Nov 25 21:17:08 crc kubenswrapper[4983]: I1125 21:17:08.145647 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-catalog-content\") pod \"redhat-operators-988l7\" (UID: \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\") " pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:08 crc kubenswrapper[4983]: I1125 21:17:08.145709 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2g2t\" (UniqueName: \"kubernetes.io/projected/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-kube-api-access-w2g2t\") pod \"redhat-operators-988l7\" (UID: \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\") " pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:08 crc kubenswrapper[4983]: I1125 21:17:08.145744 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-utilities\") pod \"redhat-operators-988l7\" (UID: 
\"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\") " pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:08 crc kubenswrapper[4983]: I1125 21:17:08.248222 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-catalog-content\") pod \"redhat-operators-988l7\" (UID: \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\") " pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:08 crc kubenswrapper[4983]: I1125 21:17:08.248303 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2g2t\" (UniqueName: \"kubernetes.io/projected/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-kube-api-access-w2g2t\") pod \"redhat-operators-988l7\" (UID: \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\") " pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:08 crc kubenswrapper[4983]: I1125 21:17:08.248345 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-utilities\") pod \"redhat-operators-988l7\" (UID: \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\") " pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:08 crc kubenswrapper[4983]: I1125 21:17:08.249000 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-utilities\") pod \"redhat-operators-988l7\" (UID: \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\") " pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:08 crc kubenswrapper[4983]: I1125 21:17:08.249589 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-catalog-content\") pod \"redhat-operators-988l7\" (UID: \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\") " 
pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:08 crc kubenswrapper[4983]: I1125 21:17:08.278702 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2g2t\" (UniqueName: \"kubernetes.io/projected/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-kube-api-access-w2g2t\") pod \"redhat-operators-988l7\" (UID: \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\") " pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:08 crc kubenswrapper[4983]: I1125 21:17:08.369540 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:08 crc kubenswrapper[4983]: I1125 21:17:08.849958 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-988l7"] Nov 25 21:17:09 crc kubenswrapper[4983]: I1125 21:17:09.141986 4983 generic.go:334] "Generic (PLEG): container finished" podID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" containerID="9a09667e3ca0a28830976b2e973d090ce6ba6471b58c3236b65af43d1b21e6de" exitCode=0 Nov 25 21:17:09 crc kubenswrapper[4983]: I1125 21:17:09.142049 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-988l7" event={"ID":"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e","Type":"ContainerDied","Data":"9a09667e3ca0a28830976b2e973d090ce6ba6471b58c3236b65af43d1b21e6de"} Nov 25 21:17:09 crc kubenswrapper[4983]: I1125 21:17:09.142089 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-988l7" event={"ID":"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e","Type":"ContainerStarted","Data":"5464881bc465bbf68b010c7c9cd60989393bff33a051da96807df6898e3d509a"} Nov 25 21:17:11 crc kubenswrapper[4983]: I1125 21:17:11.173645 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-988l7" 
event={"ID":"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e","Type":"ContainerStarted","Data":"2729b8f22392c4b183f21516863e55133ccbe064661e23a61e8a2707f5654f33"} Nov 25 21:17:12 crc kubenswrapper[4983]: I1125 21:17:12.189176 4983 generic.go:334] "Generic (PLEG): container finished" podID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" containerID="2729b8f22392c4b183f21516863e55133ccbe064661e23a61e8a2707f5654f33" exitCode=0 Nov 25 21:17:12 crc kubenswrapper[4983]: I1125 21:17:12.189263 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-988l7" event={"ID":"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e","Type":"ContainerDied","Data":"2729b8f22392c4b183f21516863e55133ccbe064661e23a61e8a2707f5654f33"} Nov 25 21:17:13 crc kubenswrapper[4983]: I1125 21:17:13.207344 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-988l7" event={"ID":"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e","Type":"ContainerStarted","Data":"51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3"} Nov 25 21:17:13 crc kubenswrapper[4983]: I1125 21:17:13.248786 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-988l7" podStartSLOduration=2.6652133879999997 podStartE2EDuration="6.248754726s" podCreationTimestamp="2025-11-25 21:17:07 +0000 UTC" firstStartedPulling="2025-11-25 21:17:09.145259412 +0000 UTC m=+3010.257792804" lastFinishedPulling="2025-11-25 21:17:12.72880075 +0000 UTC m=+3013.841334142" observedRunningTime="2025-11-25 21:17:13.23084338 +0000 UTC m=+3014.343376772" watchObservedRunningTime="2025-11-25 21:17:13.248754726 +0000 UTC m=+3014.361288118" Nov 25 21:17:18 crc kubenswrapper[4983]: I1125 21:17:18.369660 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:18 crc kubenswrapper[4983]: I1125 21:17:18.370225 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:19 crc kubenswrapper[4983]: I1125 21:17:19.422269 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-988l7" podUID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" containerName="registry-server" probeResult="failure" output=< Nov 25 21:17:19 crc kubenswrapper[4983]: timeout: failed to connect service ":50051" within 1s Nov 25 21:17:19 crc kubenswrapper[4983]: > Nov 25 21:17:28 crc kubenswrapper[4983]: I1125 21:17:28.439328 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:28 crc kubenswrapper[4983]: I1125 21:17:28.498107 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:28 crc kubenswrapper[4983]: I1125 21:17:28.695526 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-988l7"] Nov 25 21:17:30 crc kubenswrapper[4983]: I1125 21:17:30.404427 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-988l7" podUID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" containerName="registry-server" containerID="cri-o://51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3" gracePeriod=2 Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.008090 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.077149 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-utilities\") pod \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\" (UID: \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\") " Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.077459 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2g2t\" (UniqueName: \"kubernetes.io/projected/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-kube-api-access-w2g2t\") pod \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\" (UID: \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\") " Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.077627 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-catalog-content\") pod \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\" (UID: \"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e\") " Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.080843 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-utilities" (OuterVolumeSpecName: "utilities") pod "14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" (UID: "14c13d4a-3b27-44d1-b50e-b9feb9dacd0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.104297 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-kube-api-access-w2g2t" (OuterVolumeSpecName: "kube-api-access-w2g2t") pod "14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" (UID: "14c13d4a-3b27-44d1-b50e-b9feb9dacd0e"). InnerVolumeSpecName "kube-api-access-w2g2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.180173 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2g2t\" (UniqueName: \"kubernetes.io/projected/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-kube-api-access-w2g2t\") on node \"crc\" DevicePath \"\"" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.180218 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.242012 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" (UID: "14c13d4a-3b27-44d1-b50e-b9feb9dacd0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.282814 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.424931 4983 generic.go:334] "Generic (PLEG): container finished" podID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" containerID="51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3" exitCode=0 Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.425158 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-988l7" event={"ID":"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e","Type":"ContainerDied","Data":"51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3"} Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.426797 4983 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-988l7" event={"ID":"14c13d4a-3b27-44d1-b50e-b9feb9dacd0e","Type":"ContainerDied","Data":"5464881bc465bbf68b010c7c9cd60989393bff33a051da96807df6898e3d509a"} Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.426848 4983 scope.go:117] "RemoveContainer" containerID="51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.425270 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-988l7" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.473012 4983 scope.go:117] "RemoveContainer" containerID="2729b8f22392c4b183f21516863e55133ccbe064661e23a61e8a2707f5654f33" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.474691 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-988l7"] Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.487533 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-988l7"] Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.501792 4983 scope.go:117] "RemoveContainer" containerID="9a09667e3ca0a28830976b2e973d090ce6ba6471b58c3236b65af43d1b21e6de" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.559926 4983 scope.go:117] "RemoveContainer" containerID="51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3" Nov 25 21:17:31 crc kubenswrapper[4983]: E1125 21:17:31.563597 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3\": container with ID starting with 51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3 not found: ID does not exist" containerID="51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.563642 4983 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3"} err="failed to get container status \"51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3\": rpc error: code = NotFound desc = could not find container \"51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3\": container with ID starting with 51619841d46aa86ac1ec4fd9507638d2c866161d0f5c0c85799d09b7a15cb6c3 not found: ID does not exist" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.563671 4983 scope.go:117] "RemoveContainer" containerID="2729b8f22392c4b183f21516863e55133ccbe064661e23a61e8a2707f5654f33" Nov 25 21:17:31 crc kubenswrapper[4983]: E1125 21:17:31.564156 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2729b8f22392c4b183f21516863e55133ccbe064661e23a61e8a2707f5654f33\": container with ID starting with 2729b8f22392c4b183f21516863e55133ccbe064661e23a61e8a2707f5654f33 not found: ID does not exist" containerID="2729b8f22392c4b183f21516863e55133ccbe064661e23a61e8a2707f5654f33" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.564204 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2729b8f22392c4b183f21516863e55133ccbe064661e23a61e8a2707f5654f33"} err="failed to get container status \"2729b8f22392c4b183f21516863e55133ccbe064661e23a61e8a2707f5654f33\": rpc error: code = NotFound desc = could not find container \"2729b8f22392c4b183f21516863e55133ccbe064661e23a61e8a2707f5654f33\": container with ID starting with 2729b8f22392c4b183f21516863e55133ccbe064661e23a61e8a2707f5654f33 not found: ID does not exist" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.564245 4983 scope.go:117] "RemoveContainer" containerID="9a09667e3ca0a28830976b2e973d090ce6ba6471b58c3236b65af43d1b21e6de" Nov 25 21:17:31 crc kubenswrapper[4983]: E1125 
21:17:31.564721 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a09667e3ca0a28830976b2e973d090ce6ba6471b58c3236b65af43d1b21e6de\": container with ID starting with 9a09667e3ca0a28830976b2e973d090ce6ba6471b58c3236b65af43d1b21e6de not found: ID does not exist" containerID="9a09667e3ca0a28830976b2e973d090ce6ba6471b58c3236b65af43d1b21e6de" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.564760 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a09667e3ca0a28830976b2e973d090ce6ba6471b58c3236b65af43d1b21e6de"} err="failed to get container status \"9a09667e3ca0a28830976b2e973d090ce6ba6471b58c3236b65af43d1b21e6de\": rpc error: code = NotFound desc = could not find container \"9a09667e3ca0a28830976b2e973d090ce6ba6471b58c3236b65af43d1b21e6de\": container with ID starting with 9a09667e3ca0a28830976b2e973d090ce6ba6471b58c3236b65af43d1b21e6de not found: ID does not exist" Nov 25 21:17:31 crc kubenswrapper[4983]: I1125 21:17:31.618353 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" path="/var/lib/kubelet/pods/14c13d4a-3b27-44d1-b50e-b9feb9dacd0e/volumes" Nov 25 21:18:09 crc kubenswrapper[4983]: I1125 21:18:09.927899 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:18:09 crc kubenswrapper[4983]: I1125 21:18:09.928787 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 25 21:18:39 crc kubenswrapper[4983]: I1125 21:18:39.927541 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:18:39 crc kubenswrapper[4983]: I1125 21:18:39.928207 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:19:09 crc kubenswrapper[4983]: I1125 21:19:09.928449 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:19:09 crc kubenswrapper[4983]: I1125 21:19:09.929487 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:19:09 crc kubenswrapper[4983]: I1125 21:19:09.929606 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 21:19:09 crc kubenswrapper[4983]: I1125 21:19:09.930520 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504"} 
pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 21:19:09 crc kubenswrapper[4983]: I1125 21:19:09.930703 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" gracePeriod=600 Nov 25 21:19:10 crc kubenswrapper[4983]: E1125 21:19:10.058432 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:19:10 crc kubenswrapper[4983]: I1125 21:19:10.665217 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" exitCode=0 Nov 25 21:19:10 crc kubenswrapper[4983]: I1125 21:19:10.665286 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504"} Nov 25 21:19:10 crc kubenswrapper[4983]: I1125 21:19:10.665844 4983 scope.go:117] "RemoveContainer" containerID="a777578c4b456f4a8673216592ae7372a1fbb2560f3d89ae83c102b06f0e54a2" Nov 25 21:19:10 crc kubenswrapper[4983]: I1125 21:19:10.671548 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 
25 21:19:10 crc kubenswrapper[4983]: E1125 21:19:10.672319 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:19:23 crc kubenswrapper[4983]: I1125 21:19:23.605942 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:19:23 crc kubenswrapper[4983]: E1125 21:19:23.606984 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:19:34 crc kubenswrapper[4983]: I1125 21:19:34.605928 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:19:34 crc kubenswrapper[4983]: E1125 21:19:34.606697 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:19:48 crc kubenswrapper[4983]: I1125 21:19:48.605106 4983 scope.go:117] "RemoveContainer" 
containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:19:48 crc kubenswrapper[4983]: E1125 21:19:48.605955 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:20:02 crc kubenswrapper[4983]: I1125 21:20:02.606490 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:20:02 crc kubenswrapper[4983]: E1125 21:20:02.607761 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:20:17 crc kubenswrapper[4983]: I1125 21:20:17.605949 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:20:17 crc kubenswrapper[4983]: E1125 21:20:17.607376 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:20:29 crc kubenswrapper[4983]: I1125 21:20:29.619490 4983 scope.go:117] 
"RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:20:29 crc kubenswrapper[4983]: E1125 21:20:29.624125 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:20:40 crc kubenswrapper[4983]: I1125 21:20:40.606126 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:20:40 crc kubenswrapper[4983]: E1125 21:20:40.607387 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:20:55 crc kubenswrapper[4983]: I1125 21:20:55.605242 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:20:55 crc kubenswrapper[4983]: E1125 21:20:55.606224 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:21:08 crc kubenswrapper[4983]: I1125 21:21:08.605037 
4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:21:08 crc kubenswrapper[4983]: E1125 21:21:08.606252 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:21:20 crc kubenswrapper[4983]: I1125 21:21:20.605961 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:21:20 crc kubenswrapper[4983]: E1125 21:21:20.607104 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:21:31 crc kubenswrapper[4983]: I1125 21:21:31.605858 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:21:31 crc kubenswrapper[4983]: E1125 21:21:31.607061 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 
21:21:41.082657 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-llbmk"] Nov 25 21:21:41 crc kubenswrapper[4983]: E1125 21:21:41.084059 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" containerName="extract-utilities" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.084084 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" containerName="extract-utilities" Nov 25 21:21:41 crc kubenswrapper[4983]: E1125 21:21:41.084112 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" containerName="extract-content" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.084123 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" containerName="extract-content" Nov 25 21:21:41 crc kubenswrapper[4983]: E1125 21:21:41.084155 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" containerName="registry-server" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.084167 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" containerName="registry-server" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.084515 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c13d4a-3b27-44d1-b50e-b9feb9dacd0e" containerName="registry-server" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.086935 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.108984 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llbmk"] Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.132431 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9a8fad-9276-4fac-a4cf-2469b6018897-catalog-content\") pod \"certified-operators-llbmk\" (UID: \"9b9a8fad-9276-4fac-a4cf-2469b6018897\") " pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.132690 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9a8fad-9276-4fac-a4cf-2469b6018897-utilities\") pod \"certified-operators-llbmk\" (UID: \"9b9a8fad-9276-4fac-a4cf-2469b6018897\") " pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.132776 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp8ps\" (UniqueName: \"kubernetes.io/projected/9b9a8fad-9276-4fac-a4cf-2469b6018897-kube-api-access-qp8ps\") pod \"certified-operators-llbmk\" (UID: \"9b9a8fad-9276-4fac-a4cf-2469b6018897\") " pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.234902 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9a8fad-9276-4fac-a4cf-2469b6018897-catalog-content\") pod \"certified-operators-llbmk\" (UID: \"9b9a8fad-9276-4fac-a4cf-2469b6018897\") " pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.235262 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9a8fad-9276-4fac-a4cf-2469b6018897-utilities\") pod \"certified-operators-llbmk\" (UID: \"9b9a8fad-9276-4fac-a4cf-2469b6018897\") " pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.235288 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp8ps\" (UniqueName: \"kubernetes.io/projected/9b9a8fad-9276-4fac-a4cf-2469b6018897-kube-api-access-qp8ps\") pod \"certified-operators-llbmk\" (UID: \"9b9a8fad-9276-4fac-a4cf-2469b6018897\") " pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.236130 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9a8fad-9276-4fac-a4cf-2469b6018897-catalog-content\") pod \"certified-operators-llbmk\" (UID: \"9b9a8fad-9276-4fac-a4cf-2469b6018897\") " pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.236388 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9a8fad-9276-4fac-a4cf-2469b6018897-utilities\") pod \"certified-operators-llbmk\" (UID: \"9b9a8fad-9276-4fac-a4cf-2469b6018897\") " pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.260072 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp8ps\" (UniqueName: \"kubernetes.io/projected/9b9a8fad-9276-4fac-a4cf-2469b6018897-kube-api-access-qp8ps\") pod \"certified-operators-llbmk\" (UID: \"9b9a8fad-9276-4fac-a4cf-2469b6018897\") " pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.421596 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:41 crc kubenswrapper[4983]: I1125 21:21:41.955149 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llbmk"] Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.069678 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4qdjq"] Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.074376 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.081785 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qdjq"] Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.167247 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a616044e-ca07-44f9-9439-a1e77f98da3f-catalog-content\") pod \"redhat-marketplace-4qdjq\" (UID: \"a616044e-ca07-44f9-9439-a1e77f98da3f\") " pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.167314 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmrs4\" (UniqueName: \"kubernetes.io/projected/a616044e-ca07-44f9-9439-a1e77f98da3f-kube-api-access-pmrs4\") pod \"redhat-marketplace-4qdjq\" (UID: \"a616044e-ca07-44f9-9439-a1e77f98da3f\") " pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.167430 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a616044e-ca07-44f9-9439-a1e77f98da3f-utilities\") pod \"redhat-marketplace-4qdjq\" (UID: \"a616044e-ca07-44f9-9439-a1e77f98da3f\") " 
pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.269644 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a616044e-ca07-44f9-9439-a1e77f98da3f-utilities\") pod \"redhat-marketplace-4qdjq\" (UID: \"a616044e-ca07-44f9-9439-a1e77f98da3f\") " pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.269774 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a616044e-ca07-44f9-9439-a1e77f98da3f-catalog-content\") pod \"redhat-marketplace-4qdjq\" (UID: \"a616044e-ca07-44f9-9439-a1e77f98da3f\") " pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.269833 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmrs4\" (UniqueName: \"kubernetes.io/projected/a616044e-ca07-44f9-9439-a1e77f98da3f-kube-api-access-pmrs4\") pod \"redhat-marketplace-4qdjq\" (UID: \"a616044e-ca07-44f9-9439-a1e77f98da3f\") " pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.270235 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a616044e-ca07-44f9-9439-a1e77f98da3f-utilities\") pod \"redhat-marketplace-4qdjq\" (UID: \"a616044e-ca07-44f9-9439-a1e77f98da3f\") " pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.270410 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a616044e-ca07-44f9-9439-a1e77f98da3f-catalog-content\") pod \"redhat-marketplace-4qdjq\" (UID: \"a616044e-ca07-44f9-9439-a1e77f98da3f\") " pod="openshift-marketplace/redhat-marketplace-4qdjq" 
Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.288733 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmrs4\" (UniqueName: \"kubernetes.io/projected/a616044e-ca07-44f9-9439-a1e77f98da3f-kube-api-access-pmrs4\") pod \"redhat-marketplace-4qdjq\" (UID: \"a616044e-ca07-44f9-9439-a1e77f98da3f\") " pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.395272 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:42 crc kubenswrapper[4983]: W1125 21:21:42.736016 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda616044e_ca07_44f9_9439_a1e77f98da3f.slice/crio-90ae4b4a882aeb1ebe5e069dd2a3979ad8072c88c93d98d027eb498f31527a1f WatchSource:0}: Error finding container 90ae4b4a882aeb1ebe5e069dd2a3979ad8072c88c93d98d027eb498f31527a1f: Status 404 returned error can't find the container with id 90ae4b4a882aeb1ebe5e069dd2a3979ad8072c88c93d98d027eb498f31527a1f Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.737195 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qdjq"] Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.778021 4983 generic.go:334] "Generic (PLEG): container finished" podID="9b9a8fad-9276-4fac-a4cf-2469b6018897" containerID="985044f49919a8ab4ae3a03f89e3df2f0d254489de32302aac17c96b2e3efcf5" exitCode=0 Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.778141 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llbmk" event={"ID":"9b9a8fad-9276-4fac-a4cf-2469b6018897","Type":"ContainerDied","Data":"985044f49919a8ab4ae3a03f89e3df2f0d254489de32302aac17c96b2e3efcf5"} Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.778174 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-llbmk" event={"ID":"9b9a8fad-9276-4fac-a4cf-2469b6018897","Type":"ContainerStarted","Data":"063fe1e043bf116b04d2954a2301ff681d46c6f37273ff149ca8d5d63aacb7e8"} Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.780372 4983 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 21:21:42 crc kubenswrapper[4983]: I1125 21:21:42.781024 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qdjq" event={"ID":"a616044e-ca07-44f9-9439-a1e77f98da3f","Type":"ContainerStarted","Data":"90ae4b4a882aeb1ebe5e069dd2a3979ad8072c88c93d98d027eb498f31527a1f"} Nov 25 21:21:43 crc kubenswrapper[4983]: I1125 21:21:43.793644 4983 generic.go:334] "Generic (PLEG): container finished" podID="a616044e-ca07-44f9-9439-a1e77f98da3f" containerID="96be42e474feae69a859d7be591b0c16d888df20cf14bcf8ed760d5b645c26e6" exitCode=0 Nov 25 21:21:43 crc kubenswrapper[4983]: I1125 21:21:43.793745 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qdjq" event={"ID":"a616044e-ca07-44f9-9439-a1e77f98da3f","Type":"ContainerDied","Data":"96be42e474feae69a859d7be591b0c16d888df20cf14bcf8ed760d5b645c26e6"} Nov 25 21:21:44 crc kubenswrapper[4983]: I1125 21:21:44.812183 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llbmk" event={"ID":"9b9a8fad-9276-4fac-a4cf-2469b6018897","Type":"ContainerStarted","Data":"7999f1bdc078bdc4955af1c234da77fcd76047f6654d657109ae003acffb5605"} Nov 25 21:21:45 crc kubenswrapper[4983]: I1125 21:21:45.827289 4983 generic.go:334] "Generic (PLEG): container finished" podID="a616044e-ca07-44f9-9439-a1e77f98da3f" containerID="d1c11d98cfff8809ccf449123c9a8a0231ba85f75897bdaea4b967a9e710344e" exitCode=0 Nov 25 21:21:45 crc kubenswrapper[4983]: I1125 21:21:45.827762 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-4qdjq" event={"ID":"a616044e-ca07-44f9-9439-a1e77f98da3f","Type":"ContainerDied","Data":"d1c11d98cfff8809ccf449123c9a8a0231ba85f75897bdaea4b967a9e710344e"} Nov 25 21:21:45 crc kubenswrapper[4983]: I1125 21:21:45.831963 4983 generic.go:334] "Generic (PLEG): container finished" podID="9b9a8fad-9276-4fac-a4cf-2469b6018897" containerID="7999f1bdc078bdc4955af1c234da77fcd76047f6654d657109ae003acffb5605" exitCode=0 Nov 25 21:21:45 crc kubenswrapper[4983]: I1125 21:21:45.832027 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llbmk" event={"ID":"9b9a8fad-9276-4fac-a4cf-2469b6018897","Type":"ContainerDied","Data":"7999f1bdc078bdc4955af1c234da77fcd76047f6654d657109ae003acffb5605"} Nov 25 21:21:46 crc kubenswrapper[4983]: I1125 21:21:46.607686 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:21:46 crc kubenswrapper[4983]: E1125 21:21:46.608394 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:21:46 crc kubenswrapper[4983]: I1125 21:21:46.846857 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qdjq" event={"ID":"a616044e-ca07-44f9-9439-a1e77f98da3f","Type":"ContainerStarted","Data":"2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016"} Nov 25 21:21:46 crc kubenswrapper[4983]: I1125 21:21:46.851469 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llbmk" 
event={"ID":"9b9a8fad-9276-4fac-a4cf-2469b6018897","Type":"ContainerStarted","Data":"2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917"} Nov 25 21:21:46 crc kubenswrapper[4983]: I1125 21:21:46.884688 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4qdjq" podStartSLOduration=2.368482142 podStartE2EDuration="4.884666517s" podCreationTimestamp="2025-11-25 21:21:42 +0000 UTC" firstStartedPulling="2025-11-25 21:21:43.796041591 +0000 UTC m=+3284.908574993" lastFinishedPulling="2025-11-25 21:21:46.312225956 +0000 UTC m=+3287.424759368" observedRunningTime="2025-11-25 21:21:46.872889274 +0000 UTC m=+3287.985422676" watchObservedRunningTime="2025-11-25 21:21:46.884666517 +0000 UTC m=+3287.997199919" Nov 25 21:21:46 crc kubenswrapper[4983]: I1125 21:21:46.896420 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-llbmk" podStartSLOduration=2.435853367 podStartE2EDuration="5.896398148s" podCreationTimestamp="2025-11-25 21:21:41 +0000 UTC" firstStartedPulling="2025-11-25 21:21:42.780031622 +0000 UTC m=+3283.892565024" lastFinishedPulling="2025-11-25 21:21:46.240576413 +0000 UTC m=+3287.353109805" observedRunningTime="2025-11-25 21:21:46.890007658 +0000 UTC m=+3288.002541060" watchObservedRunningTime="2025-11-25 21:21:46.896398148 +0000 UTC m=+3288.008931550" Nov 25 21:21:51 crc kubenswrapper[4983]: I1125 21:21:51.422839 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:51 crc kubenswrapper[4983]: I1125 21:21:51.425062 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:51 crc kubenswrapper[4983]: I1125 21:21:51.505131 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-llbmk" Nov 25 
21:21:52 crc kubenswrapper[4983]: I1125 21:21:52.005969 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:52 crc kubenswrapper[4983]: I1125 21:21:52.069984 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llbmk"] Nov 25 21:21:52 crc kubenswrapper[4983]: I1125 21:21:52.396142 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:52 crc kubenswrapper[4983]: I1125 21:21:52.396491 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:52 crc kubenswrapper[4983]: I1125 21:21:52.450616 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:53 crc kubenswrapper[4983]: I1125 21:21:53.010022 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:53 crc kubenswrapper[4983]: I1125 21:21:53.947281 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-llbmk" podUID="9b9a8fad-9276-4fac-a4cf-2469b6018897" containerName="registry-server" containerID="cri-o://2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917" gracePeriod=2 Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.269317 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qdjq"] Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.522182 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.697370 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp8ps\" (UniqueName: \"kubernetes.io/projected/9b9a8fad-9276-4fac-a4cf-2469b6018897-kube-api-access-qp8ps\") pod \"9b9a8fad-9276-4fac-a4cf-2469b6018897\" (UID: \"9b9a8fad-9276-4fac-a4cf-2469b6018897\") " Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.697599 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9a8fad-9276-4fac-a4cf-2469b6018897-utilities\") pod \"9b9a8fad-9276-4fac-a4cf-2469b6018897\" (UID: \"9b9a8fad-9276-4fac-a4cf-2469b6018897\") " Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.697687 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9a8fad-9276-4fac-a4cf-2469b6018897-catalog-content\") pod \"9b9a8fad-9276-4fac-a4cf-2469b6018897\" (UID: \"9b9a8fad-9276-4fac-a4cf-2469b6018897\") " Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.699300 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b9a8fad-9276-4fac-a4cf-2469b6018897-utilities" (OuterVolumeSpecName: "utilities") pod "9b9a8fad-9276-4fac-a4cf-2469b6018897" (UID: "9b9a8fad-9276-4fac-a4cf-2469b6018897"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.709683 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9a8fad-9276-4fac-a4cf-2469b6018897-kube-api-access-qp8ps" (OuterVolumeSpecName: "kube-api-access-qp8ps") pod "9b9a8fad-9276-4fac-a4cf-2469b6018897" (UID: "9b9a8fad-9276-4fac-a4cf-2469b6018897"). InnerVolumeSpecName "kube-api-access-qp8ps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.737665 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b9a8fad-9276-4fac-a4cf-2469b6018897-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b9a8fad-9276-4fac-a4cf-2469b6018897" (UID: "9b9a8fad-9276-4fac-a4cf-2469b6018897"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.800395 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b9a8fad-9276-4fac-a4cf-2469b6018897-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.800443 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b9a8fad-9276-4fac-a4cf-2469b6018897-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.800464 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp8ps\" (UniqueName: \"kubernetes.io/projected/9b9a8fad-9276-4fac-a4cf-2469b6018897-kube-api-access-qp8ps\") on node \"crc\" DevicePath \"\"" Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.963325 4983 generic.go:334] "Generic (PLEG): container finished" podID="9b9a8fad-9276-4fac-a4cf-2469b6018897" containerID="2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917" exitCode=0 Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.963422 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-llbmk" Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.963454 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llbmk" event={"ID":"9b9a8fad-9276-4fac-a4cf-2469b6018897","Type":"ContainerDied","Data":"2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917"} Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.964065 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llbmk" event={"ID":"9b9a8fad-9276-4fac-a4cf-2469b6018897","Type":"ContainerDied","Data":"063fe1e043bf116b04d2954a2301ff681d46c6f37273ff149ca8d5d63aacb7e8"} Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.964102 4983 scope.go:117] "RemoveContainer" containerID="2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917" Nov 25 21:21:54 crc kubenswrapper[4983]: I1125 21:21:54.965506 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4qdjq" podUID="a616044e-ca07-44f9-9439-a1e77f98da3f" containerName="registry-server" containerID="cri-o://2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016" gracePeriod=2 Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.006440 4983 scope.go:117] "RemoveContainer" containerID="7999f1bdc078bdc4955af1c234da77fcd76047f6654d657109ae003acffb5605" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.031352 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llbmk"] Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.042692 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-llbmk"] Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.094837 4983 scope.go:117] "RemoveContainer" containerID="985044f49919a8ab4ae3a03f89e3df2f0d254489de32302aac17c96b2e3efcf5" Nov 25 21:21:55 crc 
kubenswrapper[4983]: I1125 21:21:55.210329 4983 scope.go:117] "RemoveContainer" containerID="2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917" Nov 25 21:21:55 crc kubenswrapper[4983]: E1125 21:21:55.236166 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917\": container with ID starting with 2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917 not found: ID does not exist" containerID="2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.236253 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917"} err="failed to get container status \"2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917\": rpc error: code = NotFound desc = could not find container \"2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917\": container with ID starting with 2d34385cc87361fb1aeba47a53b1e0525e3f351976e823082688c7dd75b7f917 not found: ID does not exist" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.236298 4983 scope.go:117] "RemoveContainer" containerID="7999f1bdc078bdc4955af1c234da77fcd76047f6654d657109ae003acffb5605" Nov 25 21:21:55 crc kubenswrapper[4983]: E1125 21:21:55.236870 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7999f1bdc078bdc4955af1c234da77fcd76047f6654d657109ae003acffb5605\": container with ID starting with 7999f1bdc078bdc4955af1c234da77fcd76047f6654d657109ae003acffb5605 not found: ID does not exist" containerID="7999f1bdc078bdc4955af1c234da77fcd76047f6654d657109ae003acffb5605" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.236916 4983 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7999f1bdc078bdc4955af1c234da77fcd76047f6654d657109ae003acffb5605"} err="failed to get container status \"7999f1bdc078bdc4955af1c234da77fcd76047f6654d657109ae003acffb5605\": rpc error: code = NotFound desc = could not find container \"7999f1bdc078bdc4955af1c234da77fcd76047f6654d657109ae003acffb5605\": container with ID starting with 7999f1bdc078bdc4955af1c234da77fcd76047f6654d657109ae003acffb5605 not found: ID does not exist" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.236943 4983 scope.go:117] "RemoveContainer" containerID="985044f49919a8ab4ae3a03f89e3df2f0d254489de32302aac17c96b2e3efcf5" Nov 25 21:21:55 crc kubenswrapper[4983]: E1125 21:21:55.237442 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"985044f49919a8ab4ae3a03f89e3df2f0d254489de32302aac17c96b2e3efcf5\": container with ID starting with 985044f49919a8ab4ae3a03f89e3df2f0d254489de32302aac17c96b2e3efcf5 not found: ID does not exist" containerID="985044f49919a8ab4ae3a03f89e3df2f0d254489de32302aac17c96b2e3efcf5" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.237483 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"985044f49919a8ab4ae3a03f89e3df2f0d254489de32302aac17c96b2e3efcf5"} err="failed to get container status \"985044f49919a8ab4ae3a03f89e3df2f0d254489de32302aac17c96b2e3efcf5\": rpc error: code = NotFound desc = could not find container \"985044f49919a8ab4ae3a03f89e3df2f0d254489de32302aac17c96b2e3efcf5\": container with ID starting with 985044f49919a8ab4ae3a03f89e3df2f0d254489de32302aac17c96b2e3efcf5 not found: ID does not exist" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.623004 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9a8fad-9276-4fac-a4cf-2469b6018897" path="/var/lib/kubelet/pods/9b9a8fad-9276-4fac-a4cf-2469b6018897/volumes" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 
21:21:55.623732 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.719020 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a616044e-ca07-44f9-9439-a1e77f98da3f-utilities\") pod \"a616044e-ca07-44f9-9439-a1e77f98da3f\" (UID: \"a616044e-ca07-44f9-9439-a1e77f98da3f\") " Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.719096 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a616044e-ca07-44f9-9439-a1e77f98da3f-catalog-content\") pod \"a616044e-ca07-44f9-9439-a1e77f98da3f\" (UID: \"a616044e-ca07-44f9-9439-a1e77f98da3f\") " Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.719296 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmrs4\" (UniqueName: \"kubernetes.io/projected/a616044e-ca07-44f9-9439-a1e77f98da3f-kube-api-access-pmrs4\") pod \"a616044e-ca07-44f9-9439-a1e77f98da3f\" (UID: \"a616044e-ca07-44f9-9439-a1e77f98da3f\") " Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.719771 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a616044e-ca07-44f9-9439-a1e77f98da3f-utilities" (OuterVolumeSpecName: "utilities") pod "a616044e-ca07-44f9-9439-a1e77f98da3f" (UID: "a616044e-ca07-44f9-9439-a1e77f98da3f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.719983 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a616044e-ca07-44f9-9439-a1e77f98da3f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.728781 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a616044e-ca07-44f9-9439-a1e77f98da3f-kube-api-access-pmrs4" (OuterVolumeSpecName: "kube-api-access-pmrs4") pod "a616044e-ca07-44f9-9439-a1e77f98da3f" (UID: "a616044e-ca07-44f9-9439-a1e77f98da3f"). InnerVolumeSpecName "kube-api-access-pmrs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.755150 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a616044e-ca07-44f9-9439-a1e77f98da3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a616044e-ca07-44f9-9439-a1e77f98da3f" (UID: "a616044e-ca07-44f9-9439-a1e77f98da3f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.821495 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmrs4\" (UniqueName: \"kubernetes.io/projected/a616044e-ca07-44f9-9439-a1e77f98da3f-kube-api-access-pmrs4\") on node \"crc\" DevicePath \"\"" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.821537 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a616044e-ca07-44f9-9439-a1e77f98da3f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.982379 4983 generic.go:334] "Generic (PLEG): container finished" podID="a616044e-ca07-44f9-9439-a1e77f98da3f" containerID="2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016" exitCode=0 Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.982457 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qdjq" event={"ID":"a616044e-ca07-44f9-9439-a1e77f98da3f","Type":"ContainerDied","Data":"2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016"} Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.982481 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4qdjq" Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.982491 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4qdjq" event={"ID":"a616044e-ca07-44f9-9439-a1e77f98da3f","Type":"ContainerDied","Data":"90ae4b4a882aeb1ebe5e069dd2a3979ad8072c88c93d98d027eb498f31527a1f"} Nov 25 21:21:55 crc kubenswrapper[4983]: I1125 21:21:55.982513 4983 scope.go:117] "RemoveContainer" containerID="2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016" Nov 25 21:21:56 crc kubenswrapper[4983]: I1125 21:21:56.014658 4983 scope.go:117] "RemoveContainer" containerID="d1c11d98cfff8809ccf449123c9a8a0231ba85f75897bdaea4b967a9e710344e" Nov 25 21:21:56 crc kubenswrapper[4983]: I1125 21:21:56.029745 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qdjq"] Nov 25 21:21:56 crc kubenswrapper[4983]: I1125 21:21:56.040740 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4qdjq"] Nov 25 21:21:56 crc kubenswrapper[4983]: I1125 21:21:56.041698 4983 scope.go:117] "RemoveContainer" containerID="96be42e474feae69a859d7be591b0c16d888df20cf14bcf8ed760d5b645c26e6" Nov 25 21:21:56 crc kubenswrapper[4983]: I1125 21:21:56.063588 4983 scope.go:117] "RemoveContainer" containerID="2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016" Nov 25 21:21:56 crc kubenswrapper[4983]: E1125 21:21:56.064834 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016\": container with ID starting with 2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016 not found: ID does not exist" containerID="2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016" Nov 25 21:21:56 crc kubenswrapper[4983]: I1125 21:21:56.064890 4983 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016"} err="failed to get container status \"2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016\": rpc error: code = NotFound desc = could not find container \"2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016\": container with ID starting with 2f6912559a8cc8e90653e9764e0f77bc566b9b776597f5957a11cdc183a87016 not found: ID does not exist" Nov 25 21:21:56 crc kubenswrapper[4983]: I1125 21:21:56.064927 4983 scope.go:117] "RemoveContainer" containerID="d1c11d98cfff8809ccf449123c9a8a0231ba85f75897bdaea4b967a9e710344e" Nov 25 21:21:56 crc kubenswrapper[4983]: E1125 21:21:56.065314 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1c11d98cfff8809ccf449123c9a8a0231ba85f75897bdaea4b967a9e710344e\": container with ID starting with d1c11d98cfff8809ccf449123c9a8a0231ba85f75897bdaea4b967a9e710344e not found: ID does not exist" containerID="d1c11d98cfff8809ccf449123c9a8a0231ba85f75897bdaea4b967a9e710344e" Nov 25 21:21:56 crc kubenswrapper[4983]: I1125 21:21:56.065350 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1c11d98cfff8809ccf449123c9a8a0231ba85f75897bdaea4b967a9e710344e"} err="failed to get container status \"d1c11d98cfff8809ccf449123c9a8a0231ba85f75897bdaea4b967a9e710344e\": rpc error: code = NotFound desc = could not find container \"d1c11d98cfff8809ccf449123c9a8a0231ba85f75897bdaea4b967a9e710344e\": container with ID starting with d1c11d98cfff8809ccf449123c9a8a0231ba85f75897bdaea4b967a9e710344e not found: ID does not exist" Nov 25 21:21:56 crc kubenswrapper[4983]: I1125 21:21:56.065378 4983 scope.go:117] "RemoveContainer" containerID="96be42e474feae69a859d7be591b0c16d888df20cf14bcf8ed760d5b645c26e6" Nov 25 21:21:56 crc kubenswrapper[4983]: E1125 
21:21:56.065668 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96be42e474feae69a859d7be591b0c16d888df20cf14bcf8ed760d5b645c26e6\": container with ID starting with 96be42e474feae69a859d7be591b0c16d888df20cf14bcf8ed760d5b645c26e6 not found: ID does not exist" containerID="96be42e474feae69a859d7be591b0c16d888df20cf14bcf8ed760d5b645c26e6" Nov 25 21:21:56 crc kubenswrapper[4983]: I1125 21:21:56.065701 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96be42e474feae69a859d7be591b0c16d888df20cf14bcf8ed760d5b645c26e6"} err="failed to get container status \"96be42e474feae69a859d7be591b0c16d888df20cf14bcf8ed760d5b645c26e6\": rpc error: code = NotFound desc = could not find container \"96be42e474feae69a859d7be591b0c16d888df20cf14bcf8ed760d5b645c26e6\": container with ID starting with 96be42e474feae69a859d7be591b0c16d888df20cf14bcf8ed760d5b645c26e6 not found: ID does not exist" Nov 25 21:21:57 crc kubenswrapper[4983]: I1125 21:21:57.622081 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a616044e-ca07-44f9-9439-a1e77f98da3f" path="/var/lib/kubelet/pods/a616044e-ca07-44f9-9439-a1e77f98da3f/volumes" Nov 25 21:21:59 crc kubenswrapper[4983]: I1125 21:21:59.624950 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:21:59 crc kubenswrapper[4983]: E1125 21:21:59.626400 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:22:11 crc kubenswrapper[4983]: I1125 21:22:11.611717 
4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:22:11 crc kubenswrapper[4983]: E1125 21:22:11.613154 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:22:26 crc kubenswrapper[4983]: I1125 21:22:26.605078 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:22:26 crc kubenswrapper[4983]: E1125 21:22:26.606041 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:22:37 crc kubenswrapper[4983]: I1125 21:22:37.605757 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:22:37 crc kubenswrapper[4983]: E1125 21:22:37.606530 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:22:48 crc kubenswrapper[4983]: I1125 
21:22:48.604867 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:22:48 crc kubenswrapper[4983]: E1125 21:22:48.605672 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:23:03 crc kubenswrapper[4983]: I1125 21:23:03.606004 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:23:03 crc kubenswrapper[4983]: E1125 21:23:03.607598 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:23:14 crc kubenswrapper[4983]: I1125 21:23:14.606185 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:23:14 crc kubenswrapper[4983]: E1125 21:23:14.610438 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:23:19 crc 
kubenswrapper[4983]: I1125 21:23:19.065186 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f4qrv"] Nov 25 21:23:19 crc kubenswrapper[4983]: E1125 21:23:19.066917 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9a8fad-9276-4fac-a4cf-2469b6018897" containerName="extract-utilities" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.066933 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9a8fad-9276-4fac-a4cf-2469b6018897" containerName="extract-utilities" Nov 25 21:23:19 crc kubenswrapper[4983]: E1125 21:23:19.066951 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a616044e-ca07-44f9-9439-a1e77f98da3f" containerName="extract-utilities" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.066957 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="a616044e-ca07-44f9-9439-a1e77f98da3f" containerName="extract-utilities" Nov 25 21:23:19 crc kubenswrapper[4983]: E1125 21:23:19.066975 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a616044e-ca07-44f9-9439-a1e77f98da3f" containerName="registry-server" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.066981 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="a616044e-ca07-44f9-9439-a1e77f98da3f" containerName="registry-server" Nov 25 21:23:19 crc kubenswrapper[4983]: E1125 21:23:19.066995 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9a8fad-9276-4fac-a4cf-2469b6018897" containerName="registry-server" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.067001 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9a8fad-9276-4fac-a4cf-2469b6018897" containerName="registry-server" Nov 25 21:23:19 crc kubenswrapper[4983]: E1125 21:23:19.067019 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a616044e-ca07-44f9-9439-a1e77f98da3f" containerName="extract-content" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 
21:23:19.067025 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="a616044e-ca07-44f9-9439-a1e77f98da3f" containerName="extract-content" Nov 25 21:23:19 crc kubenswrapper[4983]: E1125 21:23:19.067041 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9a8fad-9276-4fac-a4cf-2469b6018897" containerName="extract-content" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.067048 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9a8fad-9276-4fac-a4cf-2469b6018897" containerName="extract-content" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.067219 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="a616044e-ca07-44f9-9439-a1e77f98da3f" containerName="registry-server" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.067231 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9a8fad-9276-4fac-a4cf-2469b6018897" containerName="registry-server" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.068489 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.089506 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4qrv"] Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.238439 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec9a42c-52c3-44bd-8528-a64bcd47d71d-utilities\") pod \"community-operators-f4qrv\" (UID: \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\") " pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.238621 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec9a42c-52c3-44bd-8528-a64bcd47d71d-catalog-content\") pod \"community-operators-f4qrv\" (UID: \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\") " pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.238679 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7csxz\" (UniqueName: \"kubernetes.io/projected/bec9a42c-52c3-44bd-8528-a64bcd47d71d-kube-api-access-7csxz\") pod \"community-operators-f4qrv\" (UID: \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\") " pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.342068 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec9a42c-52c3-44bd-8528-a64bcd47d71d-catalog-content\") pod \"community-operators-f4qrv\" (UID: \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\") " pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.342140 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7csxz\" (UniqueName: \"kubernetes.io/projected/bec9a42c-52c3-44bd-8528-a64bcd47d71d-kube-api-access-7csxz\") pod \"community-operators-f4qrv\" (UID: \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\") " pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.342214 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec9a42c-52c3-44bd-8528-a64bcd47d71d-utilities\") pod \"community-operators-f4qrv\" (UID: \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\") " pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.342547 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec9a42c-52c3-44bd-8528-a64bcd47d71d-catalog-content\") pod \"community-operators-f4qrv\" (UID: \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\") " pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.342637 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec9a42c-52c3-44bd-8528-a64bcd47d71d-utilities\") pod \"community-operators-f4qrv\" (UID: \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\") " pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.379771 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7csxz\" (UniqueName: \"kubernetes.io/projected/bec9a42c-52c3-44bd-8528-a64bcd47d71d-kube-api-access-7csxz\") pod \"community-operators-f4qrv\" (UID: \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\") " pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.408963 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:19 crc kubenswrapper[4983]: I1125 21:23:19.985000 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4qrv"] Nov 25 21:23:20 crc kubenswrapper[4983]: I1125 21:23:20.035218 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4qrv" event={"ID":"bec9a42c-52c3-44bd-8528-a64bcd47d71d","Type":"ContainerStarted","Data":"f4c5e9fcfb73d571c3c313b33935224489cf60b3a1269a5f354637195d1fff7d"} Nov 25 21:23:21 crc kubenswrapper[4983]: I1125 21:23:21.053334 4983 generic.go:334] "Generic (PLEG): container finished" podID="bec9a42c-52c3-44bd-8528-a64bcd47d71d" containerID="1d6f21f56045b12f81b3133031c22e917769706c28566a42892f7e18724732cf" exitCode=0 Nov 25 21:23:21 crc kubenswrapper[4983]: I1125 21:23:21.053448 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4qrv" event={"ID":"bec9a42c-52c3-44bd-8528-a64bcd47d71d","Type":"ContainerDied","Data":"1d6f21f56045b12f81b3133031c22e917769706c28566a42892f7e18724732cf"} Nov 25 21:23:22 crc kubenswrapper[4983]: I1125 21:23:22.067382 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4qrv" event={"ID":"bec9a42c-52c3-44bd-8528-a64bcd47d71d","Type":"ContainerStarted","Data":"51af1f316817a2208d8d650443b8063a16f384e71256913f48ddd11c2cbc231c"} Nov 25 21:23:23 crc kubenswrapper[4983]: I1125 21:23:23.084275 4983 generic.go:334] "Generic (PLEG): container finished" podID="bec9a42c-52c3-44bd-8528-a64bcd47d71d" containerID="51af1f316817a2208d8d650443b8063a16f384e71256913f48ddd11c2cbc231c" exitCode=0 Nov 25 21:23:23 crc kubenswrapper[4983]: I1125 21:23:23.084401 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4qrv" 
event={"ID":"bec9a42c-52c3-44bd-8528-a64bcd47d71d","Type":"ContainerDied","Data":"51af1f316817a2208d8d650443b8063a16f384e71256913f48ddd11c2cbc231c"} Nov 25 21:23:24 crc kubenswrapper[4983]: I1125 21:23:24.100315 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4qrv" event={"ID":"bec9a42c-52c3-44bd-8528-a64bcd47d71d","Type":"ContainerStarted","Data":"7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4"} Nov 25 21:23:24 crc kubenswrapper[4983]: I1125 21:23:24.136744 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f4qrv" podStartSLOduration=2.536391005 podStartE2EDuration="5.136720604s" podCreationTimestamp="2025-11-25 21:23:19 +0000 UTC" firstStartedPulling="2025-11-25 21:23:21.056522102 +0000 UTC m=+3382.169055524" lastFinishedPulling="2025-11-25 21:23:23.656851691 +0000 UTC m=+3384.769385123" observedRunningTime="2025-11-25 21:23:24.126233115 +0000 UTC m=+3385.238766527" watchObservedRunningTime="2025-11-25 21:23:24.136720604 +0000 UTC m=+3385.249253996" Nov 25 21:23:28 crc kubenswrapper[4983]: I1125 21:23:28.605155 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:23:28 crc kubenswrapper[4983]: E1125 21:23:28.607113 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:23:29 crc kubenswrapper[4983]: I1125 21:23:29.410099 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:29 crc 
kubenswrapper[4983]: I1125 21:23:29.411963 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:29 crc kubenswrapper[4983]: I1125 21:23:29.482881 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:30 crc kubenswrapper[4983]: I1125 21:23:30.256046 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:30 crc kubenswrapper[4983]: I1125 21:23:30.354614 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4qrv"] Nov 25 21:23:32 crc kubenswrapper[4983]: I1125 21:23:32.199158 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f4qrv" podUID="bec9a42c-52c3-44bd-8528-a64bcd47d71d" containerName="registry-server" containerID="cri-o://7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4" gracePeriod=2 Nov 25 21:23:32 crc kubenswrapper[4983]: I1125 21:23:32.798402 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:32 crc kubenswrapper[4983]: I1125 21:23:32.871142 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7csxz\" (UniqueName: \"kubernetes.io/projected/bec9a42c-52c3-44bd-8528-a64bcd47d71d-kube-api-access-7csxz\") pod \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\" (UID: \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\") " Nov 25 21:23:32 crc kubenswrapper[4983]: I1125 21:23:32.871211 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec9a42c-52c3-44bd-8528-a64bcd47d71d-utilities\") pod \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\" (UID: \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\") " Nov 25 21:23:32 crc kubenswrapper[4983]: I1125 21:23:32.871491 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec9a42c-52c3-44bd-8528-a64bcd47d71d-catalog-content\") pod \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\" (UID: \"bec9a42c-52c3-44bd-8528-a64bcd47d71d\") " Nov 25 21:23:32 crc kubenswrapper[4983]: I1125 21:23:32.872416 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bec9a42c-52c3-44bd-8528-a64bcd47d71d-utilities" (OuterVolumeSpecName: "utilities") pod "bec9a42c-52c3-44bd-8528-a64bcd47d71d" (UID: "bec9a42c-52c3-44bd-8528-a64bcd47d71d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:23:32 crc kubenswrapper[4983]: I1125 21:23:32.882839 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec9a42c-52c3-44bd-8528-a64bcd47d71d-kube-api-access-7csxz" (OuterVolumeSpecName: "kube-api-access-7csxz") pod "bec9a42c-52c3-44bd-8528-a64bcd47d71d" (UID: "bec9a42c-52c3-44bd-8528-a64bcd47d71d"). InnerVolumeSpecName "kube-api-access-7csxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:23:32 crc kubenswrapper[4983]: I1125 21:23:32.885675 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7csxz\" (UniqueName: \"kubernetes.io/projected/bec9a42c-52c3-44bd-8528-a64bcd47d71d-kube-api-access-7csxz\") on node \"crc\" DevicePath \"\"" Nov 25 21:23:32 crc kubenswrapper[4983]: I1125 21:23:32.885724 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec9a42c-52c3-44bd-8528-a64bcd47d71d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:23:32 crc kubenswrapper[4983]: I1125 21:23:32.992906 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bec9a42c-52c3-44bd-8528-a64bcd47d71d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bec9a42c-52c3-44bd-8528-a64bcd47d71d" (UID: "bec9a42c-52c3-44bd-8528-a64bcd47d71d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.089757 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec9a42c-52c3-44bd-8528-a64bcd47d71d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.212693 4983 generic.go:334] "Generic (PLEG): container finished" podID="bec9a42c-52c3-44bd-8528-a64bcd47d71d" containerID="7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4" exitCode=0 Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.212765 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4qrv" event={"ID":"bec9a42c-52c3-44bd-8528-a64bcd47d71d","Type":"ContainerDied","Data":"7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4"} Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.212814 4983 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-f4qrv" event={"ID":"bec9a42c-52c3-44bd-8528-a64bcd47d71d","Type":"ContainerDied","Data":"f4c5e9fcfb73d571c3c313b33935224489cf60b3a1269a5f354637195d1fff7d"} Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.212806 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f4qrv" Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.212835 4983 scope.go:117] "RemoveContainer" containerID="7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4" Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.259405 4983 scope.go:117] "RemoveContainer" containerID="51af1f316817a2208d8d650443b8063a16f384e71256913f48ddd11c2cbc231c" Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.272983 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4qrv"] Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.291104 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f4qrv"] Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.325768 4983 scope.go:117] "RemoveContainer" containerID="1d6f21f56045b12f81b3133031c22e917769706c28566a42892f7e18724732cf" Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.350324 4983 scope.go:117] "RemoveContainer" containerID="7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4" Nov 25 21:23:33 crc kubenswrapper[4983]: E1125 21:23:33.350940 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4\": container with ID starting with 7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4 not found: ID does not exist" containerID="7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4" Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 
21:23:33.350998 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4"} err="failed to get container status \"7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4\": rpc error: code = NotFound desc = could not find container \"7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4\": container with ID starting with 7dc26687a25f08df922102785472b687cc4d43bde8c4592f255a4516bc613db4 not found: ID does not exist" Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.351035 4983 scope.go:117] "RemoveContainer" containerID="51af1f316817a2208d8d650443b8063a16f384e71256913f48ddd11c2cbc231c" Nov 25 21:23:33 crc kubenswrapper[4983]: E1125 21:23:33.355340 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51af1f316817a2208d8d650443b8063a16f384e71256913f48ddd11c2cbc231c\": container with ID starting with 51af1f316817a2208d8d650443b8063a16f384e71256913f48ddd11c2cbc231c not found: ID does not exist" containerID="51af1f316817a2208d8d650443b8063a16f384e71256913f48ddd11c2cbc231c" Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.355379 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51af1f316817a2208d8d650443b8063a16f384e71256913f48ddd11c2cbc231c"} err="failed to get container status \"51af1f316817a2208d8d650443b8063a16f384e71256913f48ddd11c2cbc231c\": rpc error: code = NotFound desc = could not find container \"51af1f316817a2208d8d650443b8063a16f384e71256913f48ddd11c2cbc231c\": container with ID starting with 51af1f316817a2208d8d650443b8063a16f384e71256913f48ddd11c2cbc231c not found: ID does not exist" Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.355406 4983 scope.go:117] "RemoveContainer" containerID="1d6f21f56045b12f81b3133031c22e917769706c28566a42892f7e18724732cf" Nov 25 21:23:33 crc 
kubenswrapper[4983]: E1125 21:23:33.358412 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d6f21f56045b12f81b3133031c22e917769706c28566a42892f7e18724732cf\": container with ID starting with 1d6f21f56045b12f81b3133031c22e917769706c28566a42892f7e18724732cf not found: ID does not exist" containerID="1d6f21f56045b12f81b3133031c22e917769706c28566a42892f7e18724732cf" Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.358442 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d6f21f56045b12f81b3133031c22e917769706c28566a42892f7e18724732cf"} err="failed to get container status \"1d6f21f56045b12f81b3133031c22e917769706c28566a42892f7e18724732cf\": rpc error: code = NotFound desc = could not find container \"1d6f21f56045b12f81b3133031c22e917769706c28566a42892f7e18724732cf\": container with ID starting with 1d6f21f56045b12f81b3133031c22e917769706c28566a42892f7e18724732cf not found: ID does not exist" Nov 25 21:23:33 crc kubenswrapper[4983]: I1125 21:23:33.622852 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec9a42c-52c3-44bd-8528-a64bcd47d71d" path="/var/lib/kubelet/pods/bec9a42c-52c3-44bd-8528-a64bcd47d71d/volumes" Nov 25 21:23:40 crc kubenswrapper[4983]: I1125 21:23:40.606062 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:23:40 crc kubenswrapper[4983]: E1125 21:23:40.607491 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:23:51 crc 
kubenswrapper[4983]: I1125 21:23:51.605301 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:23:51 crc kubenswrapper[4983]: E1125 21:23:51.606243 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:24:03 crc kubenswrapper[4983]: I1125 21:24:03.605592 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:24:03 crc kubenswrapper[4983]: E1125 21:24:03.606632 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:24:14 crc kubenswrapper[4983]: I1125 21:24:14.604962 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:24:15 crc kubenswrapper[4983]: I1125 21:24:15.738672 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"1093988904d20263a90201b32a3efa1b3b7f1044a5fe733ac6c401df50e9c93c"} Nov 25 21:26:33 crc kubenswrapper[4983]: I1125 21:26:33.410737 4983 generic.go:334] "Generic (PLEG): container finished" 
podID="66868750-3f73-47fe-a353-f88441e69915" containerID="aef8ffa0ea1f9ff5c55df9a9130e33fd26a9120f834136737f54dfea63627dcc" exitCode=0 Nov 25 21:26:33 crc kubenswrapper[4983]: I1125 21:26:33.410862 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"66868750-3f73-47fe-a353-f88441e69915","Type":"ContainerDied","Data":"aef8ffa0ea1f9ff5c55df9a9130e33fd26a9120f834136737f54dfea63627dcc"} Nov 25 21:26:34 crc kubenswrapper[4983]: I1125 21:26:34.985276 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.066338 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/66868750-3f73-47fe-a353-f88441e69915-openstack-config\") pod \"66868750-3f73-47fe-a353-f88441e69915\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.066444 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-ssh-key\") pod \"66868750-3f73-47fe-a353-f88441e69915\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.066542 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/66868750-3f73-47fe-a353-f88441e69915-test-operator-ephemeral-workdir\") pod \"66868750-3f73-47fe-a353-f88441e69915\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.066611 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx5zk\" (UniqueName: \"kubernetes.io/projected/66868750-3f73-47fe-a353-f88441e69915-kube-api-access-wx5zk\") pod 
\"66868750-3f73-47fe-a353-f88441e69915\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.066666 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-openstack-config-secret\") pod \"66868750-3f73-47fe-a353-f88441e69915\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.066756 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66868750-3f73-47fe-a353-f88441e69915-config-data\") pod \"66868750-3f73-47fe-a353-f88441e69915\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.066805 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-ca-certs\") pod \"66868750-3f73-47fe-a353-f88441e69915\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.066877 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/66868750-3f73-47fe-a353-f88441e69915-test-operator-ephemeral-temporary\") pod \"66868750-3f73-47fe-a353-f88441e69915\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.066975 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"66868750-3f73-47fe-a353-f88441e69915\" (UID: \"66868750-3f73-47fe-a353-f88441e69915\") " Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.068629 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/66868750-3f73-47fe-a353-f88441e69915-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "66868750-3f73-47fe-a353-f88441e69915" (UID: "66868750-3f73-47fe-a353-f88441e69915"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.068881 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66868750-3f73-47fe-a353-f88441e69915-config-data" (OuterVolumeSpecName: "config-data") pod "66868750-3f73-47fe-a353-f88441e69915" (UID: "66868750-3f73-47fe-a353-f88441e69915"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.073781 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66868750-3f73-47fe-a353-f88441e69915-kube-api-access-wx5zk" (OuterVolumeSpecName: "kube-api-access-wx5zk") pod "66868750-3f73-47fe-a353-f88441e69915" (UID: "66868750-3f73-47fe-a353-f88441e69915"). InnerVolumeSpecName "kube-api-access-wx5zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.075355 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "66868750-3f73-47fe-a353-f88441e69915" (UID: "66868750-3f73-47fe-a353-f88441e69915"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.084931 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66868750-3f73-47fe-a353-f88441e69915-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "66868750-3f73-47fe-a353-f88441e69915" (UID: "66868750-3f73-47fe-a353-f88441e69915"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.095889 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "66868750-3f73-47fe-a353-f88441e69915" (UID: "66868750-3f73-47fe-a353-f88441e69915"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.102699 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "66868750-3f73-47fe-a353-f88441e69915" (UID: "66868750-3f73-47fe-a353-f88441e69915"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.120977 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "66868750-3f73-47fe-a353-f88441e69915" (UID: "66868750-3f73-47fe-a353-f88441e69915"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.126230 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66868750-3f73-47fe-a353-f88441e69915-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "66868750-3f73-47fe-a353-f88441e69915" (UID: "66868750-3f73-47fe-a353-f88441e69915"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.169194 4983 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/66868750-3f73-47fe-a353-f88441e69915-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.169228 4983 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.169241 4983 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/66868750-3f73-47fe-a353-f88441e69915-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.169256 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx5zk\" (UniqueName: \"kubernetes.io/projected/66868750-3f73-47fe-a353-f88441e69915-kube-api-access-wx5zk\") on node \"crc\" DevicePath \"\"" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.169267 4983 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.169277 4983 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66868750-3f73-47fe-a353-f88441e69915-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.169285 4983 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/66868750-3f73-47fe-a353-f88441e69915-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.169296 4983 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/66868750-3f73-47fe-a353-f88441e69915-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.169336 4983 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.189531 4983 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.271329 4983 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.448502 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"66868750-3f73-47fe-a353-f88441e69915","Type":"ContainerDied","Data":"bfb77d05266bef34788572078867ecd5fec1c371b52660399fd6bf36049835c7"} Nov 25 21:26:35 crc kubenswrapper[4983]: I1125 21:26:35.448544 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfb77d05266bef34788572078867ecd5fec1c371b52660399fd6bf36049835c7" Nov 25 21:26:35 crc 
kubenswrapper[4983]: I1125 21:26:35.448582 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 21:26:39 crc kubenswrapper[4983]: I1125 21:26:39.927638 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:26:39 crc kubenswrapper[4983]: I1125 21:26:39.929780 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.340477 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 21:26:42 crc kubenswrapper[4983]: E1125 21:26:42.341647 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66868750-3f73-47fe-a353-f88441e69915" containerName="tempest-tests-tempest-tests-runner" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.341668 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="66868750-3f73-47fe-a353-f88441e69915" containerName="tempest-tests-tempest-tests-runner" Nov 25 21:26:42 crc kubenswrapper[4983]: E1125 21:26:42.341687 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec9a42c-52c3-44bd-8528-a64bcd47d71d" containerName="extract-content" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.341695 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec9a42c-52c3-44bd-8528-a64bcd47d71d" containerName="extract-content" Nov 25 21:26:42 crc kubenswrapper[4983]: E1125 21:26:42.341720 4983 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec9a42c-52c3-44bd-8528-a64bcd47d71d" containerName="registry-server" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.341727 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec9a42c-52c3-44bd-8528-a64bcd47d71d" containerName="registry-server" Nov 25 21:26:42 crc kubenswrapper[4983]: E1125 21:26:42.341769 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec9a42c-52c3-44bd-8528-a64bcd47d71d" containerName="extract-utilities" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.341778 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec9a42c-52c3-44bd-8528-a64bcd47d71d" containerName="extract-utilities" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.342018 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="66868750-3f73-47fe-a353-f88441e69915" containerName="tempest-tests-tempest-tests-runner" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.342056 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec9a42c-52c3-44bd-8528-a64bcd47d71d" containerName="registry-server" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.342864 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.345702 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-dzhtx" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.364850 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.531627 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8eafda73-e6d5-4b11-bd89-75308a7ca93b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.531843 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m72z5\" (UniqueName: \"kubernetes.io/projected/8eafda73-e6d5-4b11-bd89-75308a7ca93b-kube-api-access-m72z5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8eafda73-e6d5-4b11-bd89-75308a7ca93b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.633533 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8eafda73-e6d5-4b11-bd89-75308a7ca93b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.633760 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m72z5\" (UniqueName: 
\"kubernetes.io/projected/8eafda73-e6d5-4b11-bd89-75308a7ca93b-kube-api-access-m72z5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8eafda73-e6d5-4b11-bd89-75308a7ca93b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.634172 4983 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8eafda73-e6d5-4b11-bd89-75308a7ca93b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.672535 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8eafda73-e6d5-4b11-bd89-75308a7ca93b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.675537 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m72z5\" (UniqueName: \"kubernetes.io/projected/8eafda73-e6d5-4b11-bd89-75308a7ca93b-kube-api-access-m72z5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8eafda73-e6d5-4b11-bd89-75308a7ca93b\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 21:26:42 crc kubenswrapper[4983]: I1125 21:26:42.970228 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 21:26:43 crc kubenswrapper[4983]: I1125 21:26:43.946424 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 21:26:43 crc kubenswrapper[4983]: I1125 21:26:43.957048 4983 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 21:26:44 crc kubenswrapper[4983]: I1125 21:26:44.554847 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8eafda73-e6d5-4b11-bd89-75308a7ca93b","Type":"ContainerStarted","Data":"8ccfa2fc58d1c4e26f3c116bd301377d5398c42bc8d1399a416408c603a972b4"} Nov 25 21:26:45 crc kubenswrapper[4983]: I1125 21:26:45.572158 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8eafda73-e6d5-4b11-bd89-75308a7ca93b","Type":"ContainerStarted","Data":"c20b427b77eaa8135fc9383ad5b959cad259a40352754cc8c4ba5fb7538db37e"} Nov 25 21:26:45 crc kubenswrapper[4983]: I1125 21:26:45.604744 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.802704406 podStartE2EDuration="3.604723663s" podCreationTimestamp="2025-11-25 21:26:42 +0000 UTC" firstStartedPulling="2025-11-25 21:26:43.956770073 +0000 UTC m=+3585.069303475" lastFinishedPulling="2025-11-25 21:26:44.75878934 +0000 UTC m=+3585.871322732" observedRunningTime="2025-11-25 21:26:45.598030535 +0000 UTC m=+3586.710563967" watchObservedRunningTime="2025-11-25 21:26:45.604723663 +0000 UTC m=+3586.717257065" Nov 25 21:27:08 crc kubenswrapper[4983]: I1125 21:27:08.851980 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zkppv/must-gather-frc7g"] Nov 25 21:27:08 crc kubenswrapper[4983]: I1125 
21:27:08.858963 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkppv/must-gather-frc7g" Nov 25 21:27:08 crc kubenswrapper[4983]: I1125 21:27:08.863175 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zkppv"/"openshift-service-ca.crt" Nov 25 21:27:08 crc kubenswrapper[4983]: I1125 21:27:08.863450 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zkppv"/"kube-root-ca.crt" Nov 25 21:27:08 crc kubenswrapper[4983]: I1125 21:27:08.864016 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zkppv"/"default-dockercfg-z2l66" Nov 25 21:27:08 crc kubenswrapper[4983]: I1125 21:27:08.909805 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zkppv/must-gather-frc7g"] Nov 25 21:27:08 crc kubenswrapper[4983]: I1125 21:27:08.974504 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxspx\" (UniqueName: \"kubernetes.io/projected/5a5a15e5-44af-45a8-88d1-6621a706d11f-kube-api-access-nxspx\") pod \"must-gather-frc7g\" (UID: \"5a5a15e5-44af-45a8-88d1-6621a706d11f\") " pod="openshift-must-gather-zkppv/must-gather-frc7g" Nov 25 21:27:08 crc kubenswrapper[4983]: I1125 21:27:08.974654 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a5a15e5-44af-45a8-88d1-6621a706d11f-must-gather-output\") pod \"must-gather-frc7g\" (UID: \"5a5a15e5-44af-45a8-88d1-6621a706d11f\") " pod="openshift-must-gather-zkppv/must-gather-frc7g" Nov 25 21:27:09 crc kubenswrapper[4983]: I1125 21:27:09.076929 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxspx\" (UniqueName: \"kubernetes.io/projected/5a5a15e5-44af-45a8-88d1-6621a706d11f-kube-api-access-nxspx\") pod \"must-gather-frc7g\" 
(UID: \"5a5a15e5-44af-45a8-88d1-6621a706d11f\") " pod="openshift-must-gather-zkppv/must-gather-frc7g" Nov 25 21:27:09 crc kubenswrapper[4983]: I1125 21:27:09.077009 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a5a15e5-44af-45a8-88d1-6621a706d11f-must-gather-output\") pod \"must-gather-frc7g\" (UID: \"5a5a15e5-44af-45a8-88d1-6621a706d11f\") " pod="openshift-must-gather-zkppv/must-gather-frc7g" Nov 25 21:27:09 crc kubenswrapper[4983]: I1125 21:27:09.077391 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a5a15e5-44af-45a8-88d1-6621a706d11f-must-gather-output\") pod \"must-gather-frc7g\" (UID: \"5a5a15e5-44af-45a8-88d1-6621a706d11f\") " pod="openshift-must-gather-zkppv/must-gather-frc7g" Nov 25 21:27:09 crc kubenswrapper[4983]: I1125 21:27:09.100036 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxspx\" (UniqueName: \"kubernetes.io/projected/5a5a15e5-44af-45a8-88d1-6621a706d11f-kube-api-access-nxspx\") pod \"must-gather-frc7g\" (UID: \"5a5a15e5-44af-45a8-88d1-6621a706d11f\") " pod="openshift-must-gather-zkppv/must-gather-frc7g" Nov 25 21:27:09 crc kubenswrapper[4983]: I1125 21:27:09.187720 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkppv/must-gather-frc7g" Nov 25 21:27:09 crc kubenswrapper[4983]: I1125 21:27:09.716737 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zkppv/must-gather-frc7g"] Nov 25 21:27:09 crc kubenswrapper[4983]: I1125 21:27:09.893122 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkppv/must-gather-frc7g" event={"ID":"5a5a15e5-44af-45a8-88d1-6621a706d11f","Type":"ContainerStarted","Data":"dc4561088057a04b3e2c5a10f2d281a37ab5f5714552f09ca6a1d41f10b2c317"} Nov 25 21:27:09 crc kubenswrapper[4983]: I1125 21:27:09.927641 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:27:09 crc kubenswrapper[4983]: I1125 21:27:09.927701 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:27:10 crc kubenswrapper[4983]: I1125 21:27:10.913303 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r9m9k"] Nov 25 21:27:10 crc kubenswrapper[4983]: I1125 21:27:10.915430 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:10 crc kubenswrapper[4983]: I1125 21:27:10.926899 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9m9k"] Nov 25 21:27:11 crc kubenswrapper[4983]: I1125 21:27:11.018530 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52623d1f-03b1-435a-904b-ce3b1420be85-utilities\") pod \"redhat-operators-r9m9k\" (UID: \"52623d1f-03b1-435a-904b-ce3b1420be85\") " pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:11 crc kubenswrapper[4983]: I1125 21:27:11.018615 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52623d1f-03b1-435a-904b-ce3b1420be85-catalog-content\") pod \"redhat-operators-r9m9k\" (UID: \"52623d1f-03b1-435a-904b-ce3b1420be85\") " pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:11 crc kubenswrapper[4983]: I1125 21:27:11.018702 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrmsl\" (UniqueName: \"kubernetes.io/projected/52623d1f-03b1-435a-904b-ce3b1420be85-kube-api-access-rrmsl\") pod \"redhat-operators-r9m9k\" (UID: \"52623d1f-03b1-435a-904b-ce3b1420be85\") " pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:11 crc kubenswrapper[4983]: I1125 21:27:11.121179 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52623d1f-03b1-435a-904b-ce3b1420be85-catalog-content\") pod \"redhat-operators-r9m9k\" (UID: \"52623d1f-03b1-435a-904b-ce3b1420be85\") " pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:11 crc kubenswrapper[4983]: I1125 21:27:11.121234 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52623d1f-03b1-435a-904b-ce3b1420be85-catalog-content\") pod \"redhat-operators-r9m9k\" (UID: \"52623d1f-03b1-435a-904b-ce3b1420be85\") " pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:11 crc kubenswrapper[4983]: I1125 21:27:11.121300 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrmsl\" (UniqueName: \"kubernetes.io/projected/52623d1f-03b1-435a-904b-ce3b1420be85-kube-api-access-rrmsl\") pod \"redhat-operators-r9m9k\" (UID: \"52623d1f-03b1-435a-904b-ce3b1420be85\") " pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:11 crc kubenswrapper[4983]: I1125 21:27:11.121781 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52623d1f-03b1-435a-904b-ce3b1420be85-utilities\") pod \"redhat-operators-r9m9k\" (UID: \"52623d1f-03b1-435a-904b-ce3b1420be85\") " pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:11 crc kubenswrapper[4983]: I1125 21:27:11.122049 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52623d1f-03b1-435a-904b-ce3b1420be85-utilities\") pod \"redhat-operators-r9m9k\" (UID: \"52623d1f-03b1-435a-904b-ce3b1420be85\") " pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:11 crc kubenswrapper[4983]: I1125 21:27:11.140707 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrmsl\" (UniqueName: \"kubernetes.io/projected/52623d1f-03b1-435a-904b-ce3b1420be85-kube-api-access-rrmsl\") pod \"redhat-operators-r9m9k\" (UID: \"52623d1f-03b1-435a-904b-ce3b1420be85\") " pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:11 crc kubenswrapper[4983]: I1125 21:27:11.252681 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:11 crc kubenswrapper[4983]: I1125 21:27:11.742330 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r9m9k"] Nov 25 21:27:11 crc kubenswrapper[4983]: W1125 21:27:11.759671 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52623d1f_03b1_435a_904b_ce3b1420be85.slice/crio-9d9ee7ee87a1725f3052946abc474264d44856f3656617bd268396df3aa2c175 WatchSource:0}: Error finding container 9d9ee7ee87a1725f3052946abc474264d44856f3656617bd268396df3aa2c175: Status 404 returned error can't find the container with id 9d9ee7ee87a1725f3052946abc474264d44856f3656617bd268396df3aa2c175 Nov 25 21:27:11 crc kubenswrapper[4983]: I1125 21:27:11.924424 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9m9k" event={"ID":"52623d1f-03b1-435a-904b-ce3b1420be85","Type":"ContainerStarted","Data":"9d9ee7ee87a1725f3052946abc474264d44856f3656617bd268396df3aa2c175"} Nov 25 21:27:12 crc kubenswrapper[4983]: I1125 21:27:12.937040 4983 generic.go:334] "Generic (PLEG): container finished" podID="52623d1f-03b1-435a-904b-ce3b1420be85" containerID="31a2c4afb8b060ff73a8069bcde94da4b1160064e13b326ea07b591eea5a0b25" exitCode=0 Nov 25 21:27:12 crc kubenswrapper[4983]: I1125 21:27:12.937115 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9m9k" event={"ID":"52623d1f-03b1-435a-904b-ce3b1420be85","Type":"ContainerDied","Data":"31a2c4afb8b060ff73a8069bcde94da4b1160064e13b326ea07b591eea5a0b25"} Nov 25 21:27:14 crc kubenswrapper[4983]: I1125 21:27:14.959165 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkppv/must-gather-frc7g" 
event={"ID":"5a5a15e5-44af-45a8-88d1-6621a706d11f","Type":"ContainerStarted","Data":"bb0e9f0ee21a1ff4397dad429fcdedebd1c2564074479ba39c6f11d589cbc9ba"} Nov 25 21:27:14 crc kubenswrapper[4983]: I1125 21:27:14.959236 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkppv/must-gather-frc7g" event={"ID":"5a5a15e5-44af-45a8-88d1-6621a706d11f","Type":"ContainerStarted","Data":"0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044"} Nov 25 21:27:14 crc kubenswrapper[4983]: I1125 21:27:14.982076 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zkppv/must-gather-frc7g" podStartSLOduration=2.6461273480000003 podStartE2EDuration="6.982055185s" podCreationTimestamp="2025-11-25 21:27:08 +0000 UTC" firstStartedPulling="2025-11-25 21:27:09.704988627 +0000 UTC m=+3610.817522019" lastFinishedPulling="2025-11-25 21:27:14.040916454 +0000 UTC m=+3615.153449856" observedRunningTime="2025-11-25 21:27:14.979373093 +0000 UTC m=+3616.091906485" watchObservedRunningTime="2025-11-25 21:27:14.982055185 +0000 UTC m=+3616.094588617" Nov 25 21:27:15 crc kubenswrapper[4983]: I1125 21:27:15.972320 4983 generic.go:334] "Generic (PLEG): container finished" podID="52623d1f-03b1-435a-904b-ce3b1420be85" containerID="c1a3ca50770f5eaba1ed5fdad7bd065025a637570f433ca2861cd71da5ad2905" exitCode=0 Nov 25 21:27:15 crc kubenswrapper[4983]: I1125 21:27:15.972382 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9m9k" event={"ID":"52623d1f-03b1-435a-904b-ce3b1420be85","Type":"ContainerDied","Data":"c1a3ca50770f5eaba1ed5fdad7bd065025a637570f433ca2861cd71da5ad2905"} Nov 25 21:27:16 crc kubenswrapper[4983]: I1125 21:27:16.985249 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9m9k" 
event={"ID":"52623d1f-03b1-435a-904b-ce3b1420be85","Type":"ContainerStarted","Data":"7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8"} Nov 25 21:27:17 crc kubenswrapper[4983]: I1125 21:27:17.010018 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r9m9k" podStartSLOduration=4.546834789 podStartE2EDuration="7.009998475s" podCreationTimestamp="2025-11-25 21:27:10 +0000 UTC" firstStartedPulling="2025-11-25 21:27:13.966636012 +0000 UTC m=+3615.079169404" lastFinishedPulling="2025-11-25 21:27:16.429799698 +0000 UTC m=+3617.542333090" observedRunningTime="2025-11-25 21:27:17.005784613 +0000 UTC m=+3618.118318045" watchObservedRunningTime="2025-11-25 21:27:17.009998475 +0000 UTC m=+3618.122531877" Nov 25 21:27:18 crc kubenswrapper[4983]: I1125 21:27:18.374605 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zkppv/crc-debug-mmv4b"] Nov 25 21:27:18 crc kubenswrapper[4983]: I1125 21:27:18.376050 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkppv/crc-debug-mmv4b" Nov 25 21:27:18 crc kubenswrapper[4983]: I1125 21:27:18.475504 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/173da074-036a-4037-a7cf-c30c946efd42-host\") pod \"crc-debug-mmv4b\" (UID: \"173da074-036a-4037-a7cf-c30c946efd42\") " pod="openshift-must-gather-zkppv/crc-debug-mmv4b" Nov 25 21:27:18 crc kubenswrapper[4983]: I1125 21:27:18.475766 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x8fq\" (UniqueName: \"kubernetes.io/projected/173da074-036a-4037-a7cf-c30c946efd42-kube-api-access-5x8fq\") pod \"crc-debug-mmv4b\" (UID: \"173da074-036a-4037-a7cf-c30c946efd42\") " pod="openshift-must-gather-zkppv/crc-debug-mmv4b" Nov 25 21:27:18 crc kubenswrapper[4983]: I1125 21:27:18.578249 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x8fq\" (UniqueName: \"kubernetes.io/projected/173da074-036a-4037-a7cf-c30c946efd42-kube-api-access-5x8fq\") pod \"crc-debug-mmv4b\" (UID: \"173da074-036a-4037-a7cf-c30c946efd42\") " pod="openshift-must-gather-zkppv/crc-debug-mmv4b" Nov 25 21:27:18 crc kubenswrapper[4983]: I1125 21:27:18.578445 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/173da074-036a-4037-a7cf-c30c946efd42-host\") pod \"crc-debug-mmv4b\" (UID: \"173da074-036a-4037-a7cf-c30c946efd42\") " pod="openshift-must-gather-zkppv/crc-debug-mmv4b" Nov 25 21:27:18 crc kubenswrapper[4983]: I1125 21:27:18.578585 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/173da074-036a-4037-a7cf-c30c946efd42-host\") pod \"crc-debug-mmv4b\" (UID: \"173da074-036a-4037-a7cf-c30c946efd42\") " pod="openshift-must-gather-zkppv/crc-debug-mmv4b" Nov 25 21:27:18 crc 
kubenswrapper[4983]: I1125 21:27:18.610458 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x8fq\" (UniqueName: \"kubernetes.io/projected/173da074-036a-4037-a7cf-c30c946efd42-kube-api-access-5x8fq\") pod \"crc-debug-mmv4b\" (UID: \"173da074-036a-4037-a7cf-c30c946efd42\") " pod="openshift-must-gather-zkppv/crc-debug-mmv4b" Nov 25 21:27:18 crc kubenswrapper[4983]: I1125 21:27:18.705476 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkppv/crc-debug-mmv4b" Nov 25 21:27:19 crc kubenswrapper[4983]: I1125 21:27:19.016440 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkppv/crc-debug-mmv4b" event={"ID":"173da074-036a-4037-a7cf-c30c946efd42","Type":"ContainerStarted","Data":"cdd7d003cf354c4079893eb19fdabc5cf69b6eb960d4f286c9ad22e7306d6deb"} Nov 25 21:27:21 crc kubenswrapper[4983]: I1125 21:27:21.253535 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:21 crc kubenswrapper[4983]: I1125 21:27:21.254004 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:22 crc kubenswrapper[4983]: I1125 21:27:22.304864 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r9m9k" podUID="52623d1f-03b1-435a-904b-ce3b1420be85" containerName="registry-server" probeResult="failure" output=< Nov 25 21:27:22 crc kubenswrapper[4983]: timeout: failed to connect service ":50051" within 1s Nov 25 21:27:22 crc kubenswrapper[4983]: > Nov 25 21:27:31 crc kubenswrapper[4983]: I1125 21:27:31.327628 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:31 crc kubenswrapper[4983]: I1125 21:27:31.408106 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:31 crc kubenswrapper[4983]: I1125 21:27:31.568720 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9m9k"] Nov 25 21:27:32 crc kubenswrapper[4983]: I1125 21:27:32.138495 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkppv/crc-debug-mmv4b" event={"ID":"173da074-036a-4037-a7cf-c30c946efd42","Type":"ContainerStarted","Data":"be81215a5a335193b612d7e0af14668732a91b76236c9b9fb5b2e6e385a5a75e"} Nov 25 21:27:32 crc kubenswrapper[4983]: I1125 21:27:32.166311 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zkppv/crc-debug-mmv4b" podStartSLOduration=1.58183338 podStartE2EDuration="14.166295684s" podCreationTimestamp="2025-11-25 21:27:18 +0000 UTC" firstStartedPulling="2025-11-25 21:27:18.739932192 +0000 UTC m=+3619.852465584" lastFinishedPulling="2025-11-25 21:27:31.324394496 +0000 UTC m=+3632.436927888" observedRunningTime="2025-11-25 21:27:32.160643842 +0000 UTC m=+3633.273177234" watchObservedRunningTime="2025-11-25 21:27:32.166295684 +0000 UTC m=+3633.278829076" Nov 25 21:27:33 crc kubenswrapper[4983]: I1125 21:27:33.149521 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r9m9k" podUID="52623d1f-03b1-435a-904b-ce3b1420be85" containerName="registry-server" containerID="cri-o://7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8" gracePeriod=2 Nov 25 21:27:33 crc kubenswrapper[4983]: I1125 21:27:33.697301 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:33 crc kubenswrapper[4983]: I1125 21:27:33.815335 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrmsl\" (UniqueName: \"kubernetes.io/projected/52623d1f-03b1-435a-904b-ce3b1420be85-kube-api-access-rrmsl\") pod \"52623d1f-03b1-435a-904b-ce3b1420be85\" (UID: \"52623d1f-03b1-435a-904b-ce3b1420be85\") " Nov 25 21:27:33 crc kubenswrapper[4983]: I1125 21:27:33.815733 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52623d1f-03b1-435a-904b-ce3b1420be85-catalog-content\") pod \"52623d1f-03b1-435a-904b-ce3b1420be85\" (UID: \"52623d1f-03b1-435a-904b-ce3b1420be85\") " Nov 25 21:27:33 crc kubenswrapper[4983]: I1125 21:27:33.815850 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52623d1f-03b1-435a-904b-ce3b1420be85-utilities\") pod \"52623d1f-03b1-435a-904b-ce3b1420be85\" (UID: \"52623d1f-03b1-435a-904b-ce3b1420be85\") " Nov 25 21:27:33 crc kubenswrapper[4983]: I1125 21:27:33.816487 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52623d1f-03b1-435a-904b-ce3b1420be85-utilities" (OuterVolumeSpecName: "utilities") pod "52623d1f-03b1-435a-904b-ce3b1420be85" (UID: "52623d1f-03b1-435a-904b-ce3b1420be85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:27:33 crc kubenswrapper[4983]: I1125 21:27:33.822001 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52623d1f-03b1-435a-904b-ce3b1420be85-kube-api-access-rrmsl" (OuterVolumeSpecName: "kube-api-access-rrmsl") pod "52623d1f-03b1-435a-904b-ce3b1420be85" (UID: "52623d1f-03b1-435a-904b-ce3b1420be85"). InnerVolumeSpecName "kube-api-access-rrmsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:27:33 crc kubenswrapper[4983]: I1125 21:27:33.917992 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrmsl\" (UniqueName: \"kubernetes.io/projected/52623d1f-03b1-435a-904b-ce3b1420be85-kube-api-access-rrmsl\") on node \"crc\" DevicePath \"\"" Nov 25 21:27:33 crc kubenswrapper[4983]: I1125 21:27:33.918034 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52623d1f-03b1-435a-904b-ce3b1420be85-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:27:33 crc kubenswrapper[4983]: I1125 21:27:33.923592 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52623d1f-03b1-435a-904b-ce3b1420be85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52623d1f-03b1-435a-904b-ce3b1420be85" (UID: "52623d1f-03b1-435a-904b-ce3b1420be85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.020266 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52623d1f-03b1-435a-904b-ce3b1420be85-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.159524 4983 generic.go:334] "Generic (PLEG): container finished" podID="52623d1f-03b1-435a-904b-ce3b1420be85" containerID="7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8" exitCode=0 Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.159588 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9m9k" event={"ID":"52623d1f-03b1-435a-904b-ce3b1420be85","Type":"ContainerDied","Data":"7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8"} Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.159604 4983 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r9m9k" Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.159620 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r9m9k" event={"ID":"52623d1f-03b1-435a-904b-ce3b1420be85","Type":"ContainerDied","Data":"9d9ee7ee87a1725f3052946abc474264d44856f3656617bd268396df3aa2c175"} Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.159642 4983 scope.go:117] "RemoveContainer" containerID="7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8" Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.187391 4983 scope.go:117] "RemoveContainer" containerID="c1a3ca50770f5eaba1ed5fdad7bd065025a637570f433ca2861cd71da5ad2905" Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.192241 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r9m9k"] Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.202698 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r9m9k"] Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.219537 4983 scope.go:117] "RemoveContainer" containerID="31a2c4afb8b060ff73a8069bcde94da4b1160064e13b326ea07b591eea5a0b25" Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.269355 4983 scope.go:117] "RemoveContainer" containerID="7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8" Nov 25 21:27:34 crc kubenswrapper[4983]: E1125 21:27:34.270564 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8\": container with ID starting with 7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8 not found: ID does not exist" containerID="7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8" Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.270596 4983 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8"} err="failed to get container status \"7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8\": rpc error: code = NotFound desc = could not find container \"7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8\": container with ID starting with 7d6c7d8ed324d71f2fca256b493304b02cc442e58bf5fd06c79f55271d7ce4e8 not found: ID does not exist" Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.270616 4983 scope.go:117] "RemoveContainer" containerID="c1a3ca50770f5eaba1ed5fdad7bd065025a637570f433ca2861cd71da5ad2905" Nov 25 21:27:34 crc kubenswrapper[4983]: E1125 21:27:34.271328 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a3ca50770f5eaba1ed5fdad7bd065025a637570f433ca2861cd71da5ad2905\": container with ID starting with c1a3ca50770f5eaba1ed5fdad7bd065025a637570f433ca2861cd71da5ad2905 not found: ID does not exist" containerID="c1a3ca50770f5eaba1ed5fdad7bd065025a637570f433ca2861cd71da5ad2905" Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.271347 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a3ca50770f5eaba1ed5fdad7bd065025a637570f433ca2861cd71da5ad2905"} err="failed to get container status \"c1a3ca50770f5eaba1ed5fdad7bd065025a637570f433ca2861cd71da5ad2905\": rpc error: code = NotFound desc = could not find container \"c1a3ca50770f5eaba1ed5fdad7bd065025a637570f433ca2861cd71da5ad2905\": container with ID starting with c1a3ca50770f5eaba1ed5fdad7bd065025a637570f433ca2861cd71da5ad2905 not found: ID does not exist" Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.271361 4983 scope.go:117] "RemoveContainer" containerID="31a2c4afb8b060ff73a8069bcde94da4b1160064e13b326ea07b591eea5a0b25" Nov 25 21:27:34 crc kubenswrapper[4983]: E1125 
21:27:34.272184 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31a2c4afb8b060ff73a8069bcde94da4b1160064e13b326ea07b591eea5a0b25\": container with ID starting with 31a2c4afb8b060ff73a8069bcde94da4b1160064e13b326ea07b591eea5a0b25 not found: ID does not exist" containerID="31a2c4afb8b060ff73a8069bcde94da4b1160064e13b326ea07b591eea5a0b25" Nov 25 21:27:34 crc kubenswrapper[4983]: I1125 21:27:34.272209 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31a2c4afb8b060ff73a8069bcde94da4b1160064e13b326ea07b591eea5a0b25"} err="failed to get container status \"31a2c4afb8b060ff73a8069bcde94da4b1160064e13b326ea07b591eea5a0b25\": rpc error: code = NotFound desc = could not find container \"31a2c4afb8b060ff73a8069bcde94da4b1160064e13b326ea07b591eea5a0b25\": container with ID starting with 31a2c4afb8b060ff73a8069bcde94da4b1160064e13b326ea07b591eea5a0b25 not found: ID does not exist" Nov 25 21:27:35 crc kubenswrapper[4983]: I1125 21:27:35.617427 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52623d1f-03b1-435a-904b-ce3b1420be85" path="/var/lib/kubelet/pods/52623d1f-03b1-435a-904b-ce3b1420be85/volumes" Nov 25 21:27:39 crc kubenswrapper[4983]: I1125 21:27:39.927404 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:27:39 crc kubenswrapper[4983]: I1125 21:27:39.928110 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 25 21:27:39 crc kubenswrapper[4983]: I1125 21:27:39.928165 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 21:27:39 crc kubenswrapper[4983]: I1125 21:27:39.928961 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1093988904d20263a90201b32a3efa1b3b7f1044a5fe733ac6c401df50e9c93c"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 21:27:39 crc kubenswrapper[4983]: I1125 21:27:39.929018 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://1093988904d20263a90201b32a3efa1b3b7f1044a5fe733ac6c401df50e9c93c" gracePeriod=600 Nov 25 21:27:40 crc kubenswrapper[4983]: E1125 21:27:40.105189 4983 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod373cf631_46b3_49f3_af97_be8271ce5150.slice/crio-1093988904d20263a90201b32a3efa1b3b7f1044a5fe733ac6c401df50e9c93c.scope\": RecentStats: unable to find data in memory cache]" Nov 25 21:27:40 crc kubenswrapper[4983]: I1125 21:27:40.223807 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="1093988904d20263a90201b32a3efa1b3b7f1044a5fe733ac6c401df50e9c93c" exitCode=0 Nov 25 21:27:40 crc kubenswrapper[4983]: I1125 21:27:40.223851 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" 
event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"1093988904d20263a90201b32a3efa1b3b7f1044a5fe733ac6c401df50e9c93c"} Nov 25 21:27:40 crc kubenswrapper[4983]: I1125 21:27:40.223882 4983 scope.go:117] "RemoveContainer" containerID="5a74e76482821f51eb1fa2797d0baa2bd65767c95ed288236b1c2f3fba889504" Nov 25 21:27:44 crc kubenswrapper[4983]: I1125 21:27:44.260219 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f"} Nov 25 21:28:12 crc kubenswrapper[4983]: I1125 21:28:12.603091 4983 generic.go:334] "Generic (PLEG): container finished" podID="173da074-036a-4037-a7cf-c30c946efd42" containerID="be81215a5a335193b612d7e0af14668732a91b76236c9b9fb5b2e6e385a5a75e" exitCode=0 Nov 25 21:28:12 crc kubenswrapper[4983]: I1125 21:28:12.603168 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkppv/crc-debug-mmv4b" event={"ID":"173da074-036a-4037-a7cf-c30c946efd42","Type":"ContainerDied","Data":"be81215a5a335193b612d7e0af14668732a91b76236c9b9fb5b2e6e385a5a75e"} Nov 25 21:28:13 crc kubenswrapper[4983]: I1125 21:28:13.757034 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkppv/crc-debug-mmv4b" Nov 25 21:28:13 crc kubenswrapper[4983]: I1125 21:28:13.803826 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zkppv/crc-debug-mmv4b"] Nov 25 21:28:13 crc kubenswrapper[4983]: I1125 21:28:13.813225 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zkppv/crc-debug-mmv4b"] Nov 25 21:28:13 crc kubenswrapper[4983]: I1125 21:28:13.894267 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x8fq\" (UniqueName: \"kubernetes.io/projected/173da074-036a-4037-a7cf-c30c946efd42-kube-api-access-5x8fq\") pod \"173da074-036a-4037-a7cf-c30c946efd42\" (UID: \"173da074-036a-4037-a7cf-c30c946efd42\") " Nov 25 21:28:13 crc kubenswrapper[4983]: I1125 21:28:13.894494 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/173da074-036a-4037-a7cf-c30c946efd42-host\") pod \"173da074-036a-4037-a7cf-c30c946efd42\" (UID: \"173da074-036a-4037-a7cf-c30c946efd42\") " Nov 25 21:28:13 crc kubenswrapper[4983]: I1125 21:28:13.894987 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/173da074-036a-4037-a7cf-c30c946efd42-host" (OuterVolumeSpecName: "host") pod "173da074-036a-4037-a7cf-c30c946efd42" (UID: "173da074-036a-4037-a7cf-c30c946efd42"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 21:28:13 crc kubenswrapper[4983]: I1125 21:28:13.895702 4983 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/173da074-036a-4037-a7cf-c30c946efd42-host\") on node \"crc\" DevicePath \"\"" Nov 25 21:28:13 crc kubenswrapper[4983]: I1125 21:28:13.904034 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/173da074-036a-4037-a7cf-c30c946efd42-kube-api-access-5x8fq" (OuterVolumeSpecName: "kube-api-access-5x8fq") pod "173da074-036a-4037-a7cf-c30c946efd42" (UID: "173da074-036a-4037-a7cf-c30c946efd42"). InnerVolumeSpecName "kube-api-access-5x8fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:28:13 crc kubenswrapper[4983]: I1125 21:28:13.998549 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x8fq\" (UniqueName: \"kubernetes.io/projected/173da074-036a-4037-a7cf-c30c946efd42-kube-api-access-5x8fq\") on node \"crc\" DevicePath \"\"" Nov 25 21:28:14 crc kubenswrapper[4983]: I1125 21:28:14.629819 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdd7d003cf354c4079893eb19fdabc5cf69b6eb960d4f286c9ad22e7306d6deb" Nov 25 21:28:14 crc kubenswrapper[4983]: I1125 21:28:14.629929 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkppv/crc-debug-mmv4b" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.003733 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zkppv/crc-debug-4kcrv"] Nov 25 21:28:15 crc kubenswrapper[4983]: E1125 21:28:15.004176 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52623d1f-03b1-435a-904b-ce3b1420be85" containerName="registry-server" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.004248 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="52623d1f-03b1-435a-904b-ce3b1420be85" containerName="registry-server" Nov 25 21:28:15 crc kubenswrapper[4983]: E1125 21:28:15.004263 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173da074-036a-4037-a7cf-c30c946efd42" containerName="container-00" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.004271 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="173da074-036a-4037-a7cf-c30c946efd42" containerName="container-00" Nov 25 21:28:15 crc kubenswrapper[4983]: E1125 21:28:15.004284 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52623d1f-03b1-435a-904b-ce3b1420be85" containerName="extract-content" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.004292 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="52623d1f-03b1-435a-904b-ce3b1420be85" containerName="extract-content" Nov 25 21:28:15 crc kubenswrapper[4983]: E1125 21:28:15.004313 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52623d1f-03b1-435a-904b-ce3b1420be85" containerName="extract-utilities" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.004321 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="52623d1f-03b1-435a-904b-ce3b1420be85" containerName="extract-utilities" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.004538 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="52623d1f-03b1-435a-904b-ce3b1420be85" 
containerName="registry-server" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.004718 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="173da074-036a-4037-a7cf-c30c946efd42" containerName="container-00" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.005786 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkppv/crc-debug-4kcrv" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.125679 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/485098f3-0e8b-4bc1-a38f-a70d31cdd308-host\") pod \"crc-debug-4kcrv\" (UID: \"485098f3-0e8b-4bc1-a38f-a70d31cdd308\") " pod="openshift-must-gather-zkppv/crc-debug-4kcrv" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.125746 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cwpw\" (UniqueName: \"kubernetes.io/projected/485098f3-0e8b-4bc1-a38f-a70d31cdd308-kube-api-access-5cwpw\") pod \"crc-debug-4kcrv\" (UID: \"485098f3-0e8b-4bc1-a38f-a70d31cdd308\") " pod="openshift-must-gather-zkppv/crc-debug-4kcrv" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.227174 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/485098f3-0e8b-4bc1-a38f-a70d31cdd308-host\") pod \"crc-debug-4kcrv\" (UID: \"485098f3-0e8b-4bc1-a38f-a70d31cdd308\") " pod="openshift-must-gather-zkppv/crc-debug-4kcrv" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.227219 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cwpw\" (UniqueName: \"kubernetes.io/projected/485098f3-0e8b-4bc1-a38f-a70d31cdd308-kube-api-access-5cwpw\") pod \"crc-debug-4kcrv\" (UID: \"485098f3-0e8b-4bc1-a38f-a70d31cdd308\") " pod="openshift-must-gather-zkppv/crc-debug-4kcrv" Nov 25 21:28:15 crc 
kubenswrapper[4983]: I1125 21:28:15.227357 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/485098f3-0e8b-4bc1-a38f-a70d31cdd308-host\") pod \"crc-debug-4kcrv\" (UID: \"485098f3-0e8b-4bc1-a38f-a70d31cdd308\") " pod="openshift-must-gather-zkppv/crc-debug-4kcrv" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.249145 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cwpw\" (UniqueName: \"kubernetes.io/projected/485098f3-0e8b-4bc1-a38f-a70d31cdd308-kube-api-access-5cwpw\") pod \"crc-debug-4kcrv\" (UID: \"485098f3-0e8b-4bc1-a38f-a70d31cdd308\") " pod="openshift-must-gather-zkppv/crc-debug-4kcrv" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.330028 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkppv/crc-debug-4kcrv" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.621142 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="173da074-036a-4037-a7cf-c30c946efd42" path="/var/lib/kubelet/pods/173da074-036a-4037-a7cf-c30c946efd42/volumes" Nov 25 21:28:15 crc kubenswrapper[4983]: I1125 21:28:15.650876 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkppv/crc-debug-4kcrv" event={"ID":"485098f3-0e8b-4bc1-a38f-a70d31cdd308","Type":"ContainerStarted","Data":"b062fb51bf316c3b10ba4bfebbf1d588db50b95e3ad203ceaddf8464257ba11f"} Nov 25 21:28:16 crc kubenswrapper[4983]: I1125 21:28:16.664615 4983 generic.go:334] "Generic (PLEG): container finished" podID="485098f3-0e8b-4bc1-a38f-a70d31cdd308" containerID="f89efd1cc16c208ab71814d3b97526b5b1b7d708363fc4a834f4902143cd3a7a" exitCode=0 Nov 25 21:28:16 crc kubenswrapper[4983]: I1125 21:28:16.664672 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkppv/crc-debug-4kcrv" 
event={"ID":"485098f3-0e8b-4bc1-a38f-a70d31cdd308","Type":"ContainerDied","Data":"f89efd1cc16c208ab71814d3b97526b5b1b7d708363fc4a834f4902143cd3a7a"} Nov 25 21:28:17 crc kubenswrapper[4983]: I1125 21:28:17.212566 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zkppv/crc-debug-4kcrv"] Nov 25 21:28:17 crc kubenswrapper[4983]: I1125 21:28:17.221004 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zkppv/crc-debug-4kcrv"] Nov 25 21:28:17 crc kubenswrapper[4983]: I1125 21:28:17.805027 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkppv/crc-debug-4kcrv" Nov 25 21:28:17 crc kubenswrapper[4983]: I1125 21:28:17.978720 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/485098f3-0e8b-4bc1-a38f-a70d31cdd308-host\") pod \"485098f3-0e8b-4bc1-a38f-a70d31cdd308\" (UID: \"485098f3-0e8b-4bc1-a38f-a70d31cdd308\") " Nov 25 21:28:17 crc kubenswrapper[4983]: I1125 21:28:17.978899 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/485098f3-0e8b-4bc1-a38f-a70d31cdd308-host" (OuterVolumeSpecName: "host") pod "485098f3-0e8b-4bc1-a38f-a70d31cdd308" (UID: "485098f3-0e8b-4bc1-a38f-a70d31cdd308"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 21:28:17 crc kubenswrapper[4983]: I1125 21:28:17.979697 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cwpw\" (UniqueName: \"kubernetes.io/projected/485098f3-0e8b-4bc1-a38f-a70d31cdd308-kube-api-access-5cwpw\") pod \"485098f3-0e8b-4bc1-a38f-a70d31cdd308\" (UID: \"485098f3-0e8b-4bc1-a38f-a70d31cdd308\") " Nov 25 21:28:17 crc kubenswrapper[4983]: I1125 21:28:17.980702 4983 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/485098f3-0e8b-4bc1-a38f-a70d31cdd308-host\") on node \"crc\" DevicePath \"\"" Nov 25 21:28:17 crc kubenswrapper[4983]: I1125 21:28:17.990204 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485098f3-0e8b-4bc1-a38f-a70d31cdd308-kube-api-access-5cwpw" (OuterVolumeSpecName: "kube-api-access-5cwpw") pod "485098f3-0e8b-4bc1-a38f-a70d31cdd308" (UID: "485098f3-0e8b-4bc1-a38f-a70d31cdd308"). InnerVolumeSpecName "kube-api-access-5cwpw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.083424 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cwpw\" (UniqueName: \"kubernetes.io/projected/485098f3-0e8b-4bc1-a38f-a70d31cdd308-kube-api-access-5cwpw\") on node \"crc\" DevicePath \"\"" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.447532 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zkppv/crc-debug-htkdd"] Nov 25 21:28:18 crc kubenswrapper[4983]: E1125 21:28:18.448078 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485098f3-0e8b-4bc1-a38f-a70d31cdd308" containerName="container-00" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.448103 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="485098f3-0e8b-4bc1-a38f-a70d31cdd308" containerName="container-00" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.448330 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="485098f3-0e8b-4bc1-a38f-a70d31cdd308" containerName="container-00" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.449045 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkppv/crc-debug-htkdd" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.600467 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcq4z\" (UniqueName: \"kubernetes.io/projected/3fab102e-55ec-4c52-91a7-b3db9d74be29-kube-api-access-tcq4z\") pod \"crc-debug-htkdd\" (UID: \"3fab102e-55ec-4c52-91a7-b3db9d74be29\") " pod="openshift-must-gather-zkppv/crc-debug-htkdd" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.601174 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fab102e-55ec-4c52-91a7-b3db9d74be29-host\") pod \"crc-debug-htkdd\" (UID: \"3fab102e-55ec-4c52-91a7-b3db9d74be29\") " pod="openshift-must-gather-zkppv/crc-debug-htkdd" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.697408 4983 scope.go:117] "RemoveContainer" containerID="f89efd1cc16c208ab71814d3b97526b5b1b7d708363fc4a834f4902143cd3a7a" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.697474 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkppv/crc-debug-4kcrv" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.704075 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcq4z\" (UniqueName: \"kubernetes.io/projected/3fab102e-55ec-4c52-91a7-b3db9d74be29-kube-api-access-tcq4z\") pod \"crc-debug-htkdd\" (UID: \"3fab102e-55ec-4c52-91a7-b3db9d74be29\") " pod="openshift-must-gather-zkppv/crc-debug-htkdd" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.704469 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fab102e-55ec-4c52-91a7-b3db9d74be29-host\") pod \"crc-debug-htkdd\" (UID: \"3fab102e-55ec-4c52-91a7-b3db9d74be29\") " pod="openshift-must-gather-zkppv/crc-debug-htkdd" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.704610 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fab102e-55ec-4c52-91a7-b3db9d74be29-host\") pod \"crc-debug-htkdd\" (UID: \"3fab102e-55ec-4c52-91a7-b3db9d74be29\") " pod="openshift-must-gather-zkppv/crc-debug-htkdd" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.722753 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcq4z\" (UniqueName: \"kubernetes.io/projected/3fab102e-55ec-4c52-91a7-b3db9d74be29-kube-api-access-tcq4z\") pod \"crc-debug-htkdd\" (UID: \"3fab102e-55ec-4c52-91a7-b3db9d74be29\") " pod="openshift-must-gather-zkppv/crc-debug-htkdd" Nov 25 21:28:18 crc kubenswrapper[4983]: I1125 21:28:18.771413 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkppv/crc-debug-htkdd" Nov 25 21:28:18 crc kubenswrapper[4983]: W1125 21:28:18.807863 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fab102e_55ec_4c52_91a7_b3db9d74be29.slice/crio-dae1be3352076171b01032a11e02e214ae073d9efadabb12e1f1cd566c865bc1 WatchSource:0}: Error finding container dae1be3352076171b01032a11e02e214ae073d9efadabb12e1f1cd566c865bc1: Status 404 returned error can't find the container with id dae1be3352076171b01032a11e02e214ae073d9efadabb12e1f1cd566c865bc1 Nov 25 21:28:19 crc kubenswrapper[4983]: I1125 21:28:19.624624 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="485098f3-0e8b-4bc1-a38f-a70d31cdd308" path="/var/lib/kubelet/pods/485098f3-0e8b-4bc1-a38f-a70d31cdd308/volumes" Nov 25 21:28:19 crc kubenswrapper[4983]: I1125 21:28:19.718036 4983 generic.go:334] "Generic (PLEG): container finished" podID="3fab102e-55ec-4c52-91a7-b3db9d74be29" containerID="16a5078a87db6480d4a1186b9ea6653058fe1d8c5e496dc67e257389d56b29ca" exitCode=0 Nov 25 21:28:19 crc kubenswrapper[4983]: I1125 21:28:19.718110 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkppv/crc-debug-htkdd" event={"ID":"3fab102e-55ec-4c52-91a7-b3db9d74be29","Type":"ContainerDied","Data":"16a5078a87db6480d4a1186b9ea6653058fe1d8c5e496dc67e257389d56b29ca"} Nov 25 21:28:19 crc kubenswrapper[4983]: I1125 21:28:19.718154 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkppv/crc-debug-htkdd" event={"ID":"3fab102e-55ec-4c52-91a7-b3db9d74be29","Type":"ContainerStarted","Data":"dae1be3352076171b01032a11e02e214ae073d9efadabb12e1f1cd566c865bc1"} Nov 25 21:28:19 crc kubenswrapper[4983]: I1125 21:28:19.768113 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zkppv/crc-debug-htkdd"] Nov 25 21:28:19 crc kubenswrapper[4983]: I1125 21:28:19.775441 4983 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zkppv/crc-debug-htkdd"] Nov 25 21:28:20 crc kubenswrapper[4983]: I1125 21:28:20.878829 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkppv/crc-debug-htkdd" Nov 25 21:28:20 crc kubenswrapper[4983]: I1125 21:28:20.959449 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcq4z\" (UniqueName: \"kubernetes.io/projected/3fab102e-55ec-4c52-91a7-b3db9d74be29-kube-api-access-tcq4z\") pod \"3fab102e-55ec-4c52-91a7-b3db9d74be29\" (UID: \"3fab102e-55ec-4c52-91a7-b3db9d74be29\") " Nov 25 21:28:20 crc kubenswrapper[4983]: I1125 21:28:20.959992 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fab102e-55ec-4c52-91a7-b3db9d74be29-host\") pod \"3fab102e-55ec-4c52-91a7-b3db9d74be29\" (UID: \"3fab102e-55ec-4c52-91a7-b3db9d74be29\") " Nov 25 21:28:20 crc kubenswrapper[4983]: I1125 21:28:20.960066 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fab102e-55ec-4c52-91a7-b3db9d74be29-host" (OuterVolumeSpecName: "host") pod "3fab102e-55ec-4c52-91a7-b3db9d74be29" (UID: "3fab102e-55ec-4c52-91a7-b3db9d74be29"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 21:28:20 crc kubenswrapper[4983]: I1125 21:28:20.960629 4983 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3fab102e-55ec-4c52-91a7-b3db9d74be29-host\") on node \"crc\" DevicePath \"\"" Nov 25 21:28:20 crc kubenswrapper[4983]: I1125 21:28:20.977689 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fab102e-55ec-4c52-91a7-b3db9d74be29-kube-api-access-tcq4z" (OuterVolumeSpecName: "kube-api-access-tcq4z") pod "3fab102e-55ec-4c52-91a7-b3db9d74be29" (UID: "3fab102e-55ec-4c52-91a7-b3db9d74be29"). InnerVolumeSpecName "kube-api-access-tcq4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:28:21 crc kubenswrapper[4983]: I1125 21:28:21.062635 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcq4z\" (UniqueName: \"kubernetes.io/projected/3fab102e-55ec-4c52-91a7-b3db9d74be29-kube-api-access-tcq4z\") on node \"crc\" DevicePath \"\"" Nov 25 21:28:21 crc kubenswrapper[4983]: I1125 21:28:21.618692 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fab102e-55ec-4c52-91a7-b3db9d74be29" path="/var/lib/kubelet/pods/3fab102e-55ec-4c52-91a7-b3db9d74be29/volumes" Nov 25 21:28:21 crc kubenswrapper[4983]: I1125 21:28:21.761866 4983 scope.go:117] "RemoveContainer" containerID="16a5078a87db6480d4a1186b9ea6653058fe1d8c5e496dc67e257389d56b29ca" Nov 25 21:28:21 crc kubenswrapper[4983]: I1125 21:28:21.761872 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkppv/crc-debug-htkdd" Nov 25 21:28:35 crc kubenswrapper[4983]: I1125 21:28:35.660492 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b69f99f98-bmrwt_d239c72e-850f-45f1-9f9f-568c2bee1546/barbican-api/0.log" Nov 25 21:28:35 crc kubenswrapper[4983]: I1125 21:28:35.779287 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b69f99f98-bmrwt_d239c72e-850f-45f1-9f9f-568c2bee1546/barbican-api-log/0.log" Nov 25 21:28:35 crc kubenswrapper[4983]: I1125 21:28:35.829676 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6d94b4b49b-7bcmx_067348dd-7070-4616-871c-46a8ec91be00/barbican-keystone-listener/0.log" Nov 25 21:28:35 crc kubenswrapper[4983]: I1125 21:28:35.932694 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6d94b4b49b-7bcmx_067348dd-7070-4616-871c-46a8ec91be00/barbican-keystone-listener-log/0.log" Nov 25 21:28:36 crc kubenswrapper[4983]: I1125 21:28:36.075492 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-64d7f8cd7f-49776_d50f667a-e040-4db9-83d1-a1f72b138332/barbican-worker-log/0.log" Nov 25 21:28:36 crc kubenswrapper[4983]: I1125 21:28:36.079682 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-64d7f8cd7f-49776_d50f667a-e040-4db9-83d1-a1f72b138332/barbican-worker/0.log" Nov 25 21:28:36 crc kubenswrapper[4983]: I1125 21:28:36.263281 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h_96bb1f23-94d5-4a68-995b-da2394c75158/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:36 crc kubenswrapper[4983]: I1125 21:28:36.310024 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a0a145c6-0515-4cd9-98d1-438f069496e8/ceilometer-central-agent/0.log" 
Nov 25 21:28:36 crc kubenswrapper[4983]: I1125 21:28:36.356845 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a0a145c6-0515-4cd9-98d1-438f069496e8/ceilometer-notification-agent/0.log" Nov 25 21:28:36 crc kubenswrapper[4983]: I1125 21:28:36.449667 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a0a145c6-0515-4cd9-98d1-438f069496e8/proxy-httpd/0.log" Nov 25 21:28:36 crc kubenswrapper[4983]: I1125 21:28:36.474239 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a0a145c6-0515-4cd9-98d1-438f069496e8/sg-core/0.log" Nov 25 21:28:36 crc kubenswrapper[4983]: I1125 21:28:36.634538 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_af04822c-335c-4c44-9711-19c401c54c9f/cinder-api/0.log" Nov 25 21:28:36 crc kubenswrapper[4983]: I1125 21:28:36.670794 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_af04822c-335c-4c44-9711-19c401c54c9f/cinder-api-log/0.log" Nov 25 21:28:36 crc kubenswrapper[4983]: I1125 21:28:36.780735 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cb5e604e-0461-4f5c-acd3-412096243892/cinder-scheduler/0.log" Nov 25 21:28:36 crc kubenswrapper[4983]: I1125 21:28:36.898923 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cb5e604e-0461-4f5c-acd3-412096243892/probe/0.log" Nov 25 21:28:37 crc kubenswrapper[4983]: I1125 21:28:37.013406 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-bft2l_0cc000c0-25d9-4390-b50f-da1ba38b6f7c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:37 crc kubenswrapper[4983]: I1125 21:28:37.112098 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp_da7ae86f-6623-4fd0-b7f1-ad16a2056571/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:37 crc kubenswrapper[4983]: I1125 21:28:37.205669 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-qb46c_1be61955-ba9d-4fef-8bb8-41bae01eb8a2/init/0.log" Nov 25 21:28:37 crc kubenswrapper[4983]: I1125 21:28:37.384928 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-qb46c_1be61955-ba9d-4fef-8bb8-41bae01eb8a2/dnsmasq-dns/0.log" Nov 25 21:28:37 crc kubenswrapper[4983]: I1125 21:28:37.396861 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-qb46c_1be61955-ba9d-4fef-8bb8-41bae01eb8a2/init/0.log" Nov 25 21:28:37 crc kubenswrapper[4983]: I1125 21:28:37.458861 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q_b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:37 crc kubenswrapper[4983]: I1125 21:28:37.589408 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_76343139-3638-4cc2-a865-ddb20d2d35a6/glance-httpd/0.log" Nov 25 21:28:37 crc kubenswrapper[4983]: I1125 21:28:37.612996 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_76343139-3638-4cc2-a865-ddb20d2d35a6/glance-log/0.log" Nov 25 21:28:37 crc kubenswrapper[4983]: I1125 21:28:37.799398 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e2eefb7f-341f-4f91-8b67-2fc45217b414/glance-httpd/0.log" Nov 25 21:28:37 crc kubenswrapper[4983]: I1125 21:28:37.865531 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_e2eefb7f-341f-4f91-8b67-2fc45217b414/glance-log/0.log" Nov 25 21:28:37 crc kubenswrapper[4983]: I1125 21:28:37.981916 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7f9d7c8cfb-s259l_ed474a92-4901-4ded-89c1-736427d72c92/horizon/0.log" Nov 25 21:28:38 crc kubenswrapper[4983]: I1125 21:28:38.088759 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v_e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:38 crc kubenswrapper[4983]: I1125 21:28:38.347495 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-jt8gn_aa0b2190-bcf1-4f2a-8e87-4805b514d3bf/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:38 crc kubenswrapper[4983]: I1125 21:28:38.397943 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7f9d7c8cfb-s259l_ed474a92-4901-4ded-89c1-736427d72c92/horizon-log/0.log" Nov 25 21:28:38 crc kubenswrapper[4983]: I1125 21:28:38.598657 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401741-hmw9c_294b565c-28a4-45a2-a9af-8eefed6b82a4/keystone-cron/0.log" Nov 25 21:28:38 crc kubenswrapper[4983]: I1125 21:28:38.617521 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7b58ff8778-fz55h_9aca410d-c0fd-4ba7-81c0-434416f8dfbd/keystone-api/0.log" Nov 25 21:28:38 crc kubenswrapper[4983]: I1125 21:28:38.823997 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ae259426-d08e-4d8f-b3e7-f06847f1c2da/kube-state-metrics/3.log" Nov 25 21:28:38 crc kubenswrapper[4983]: I1125 21:28:38.859997 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_ae259426-d08e-4d8f-b3e7-f06847f1c2da/kube-state-metrics/2.log" Nov 25 21:28:38 crc kubenswrapper[4983]: I1125 21:28:38.887308 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c_4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:39 crc kubenswrapper[4983]: I1125 21:28:39.193124 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-778b5c8885-ww4ht_14a0dfa8-a664-45d8-bb1d-731f807b1427/neutron-api/0.log" Nov 25 21:28:39 crc kubenswrapper[4983]: I1125 21:28:39.235270 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-778b5c8885-ww4ht_14a0dfa8-a664-45d8-bb1d-731f807b1427/neutron-httpd/0.log" Nov 25 21:28:39 crc kubenswrapper[4983]: I1125 21:28:39.375976 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph_996735a0-8e3c-4c62-9403-3e02669b7c63/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:39 crc kubenswrapper[4983]: I1125 21:28:39.871646 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_279216cc-b7af-430b-95ff-07b9330eea8c/nova-cell0-conductor-conductor/0.log" Nov 25 21:28:39 crc kubenswrapper[4983]: I1125 21:28:39.882226 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b199cea8-e855-4316-b80b-8cad8bce9f45/nova-api-log/0.log" Nov 25 21:28:40 crc kubenswrapper[4983]: I1125 21:28:40.067705 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b199cea8-e855-4316-b80b-8cad8bce9f45/nova-api-api/0.log" Nov 25 21:28:40 crc kubenswrapper[4983]: I1125 21:28:40.108295 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3eb68b08-479a-4831-b6a5-ad478a3922e5/nova-cell1-conductor-conductor/0.log" 
Nov 25 21:28:40 crc kubenswrapper[4983]: I1125 21:28:40.209658 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_968ee4da-4360-486b-a70a-a805a19a6b42/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 21:28:40 crc kubenswrapper[4983]: I1125 21:28:40.370867 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-b7m44_7ce9c984-8450-479b-aa5f-58f81943cf56/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:40 crc kubenswrapper[4983]: I1125 21:28:40.496801 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_eba27c66-d8be-4e3c-a39c-2f521c69a3d6/nova-metadata-log/0.log" Nov 25 21:28:40 crc kubenswrapper[4983]: I1125 21:28:40.893276 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0aa32390-cf93-44b1-b27f-4b66ffb61a41/nova-scheduler-scheduler/0.log" Nov 25 21:28:40 crc kubenswrapper[4983]: I1125 21:28:40.919672 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_13cd3da7-02fa-42c2-a62a-527df23e92b1/mysql-bootstrap/0.log" Nov 25 21:28:41 crc kubenswrapper[4983]: I1125 21:28:41.091003 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_13cd3da7-02fa-42c2-a62a-527df23e92b1/galera/0.log" Nov 25 21:28:41 crc kubenswrapper[4983]: I1125 21:28:41.126618 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_13cd3da7-02fa-42c2-a62a-527df23e92b1/mysql-bootstrap/0.log" Nov 25 21:28:41 crc kubenswrapper[4983]: I1125 21:28:41.309057 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ca63c157-60df-45de-854f-03989f565e8f/mysql-bootstrap/0.log" Nov 25 21:28:41 crc kubenswrapper[4983]: I1125 21:28:41.553602 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_ca63c157-60df-45de-854f-03989f565e8f/galera/0.log" Nov 25 21:28:41 crc kubenswrapper[4983]: I1125 21:28:41.563287 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ca63c157-60df-45de-854f-03989f565e8f/mysql-bootstrap/0.log" Nov 25 21:28:41 crc kubenswrapper[4983]: I1125 21:28:41.722799 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3b2fefe1-596f-4e7c-8de9-b3c019ed40ea/openstackclient/0.log" Nov 25 21:28:41 crc kubenswrapper[4983]: I1125 21:28:41.769753 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_eba27c66-d8be-4e3c-a39c-2f521c69a3d6/nova-metadata-metadata/0.log" Nov 25 21:28:41 crc kubenswrapper[4983]: I1125 21:28:41.802280 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fgn5f_cd8b1052-9050-4771-8be4-3138d9c54d62/ovn-controller/0.log" Nov 25 21:28:41 crc kubenswrapper[4983]: I1125 21:28:41.979597 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jxnnh_18caf88a-0da7-4144-9c11-301f0a49f3fb/openstack-network-exporter/0.log" Nov 25 21:28:42 crc kubenswrapper[4983]: I1125 21:28:42.065259 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-47bp7_1ab2fd6b-f417-4b0e-b1ac-d374d64b7712/ovsdb-server-init/0.log" Nov 25 21:28:42 crc kubenswrapper[4983]: I1125 21:28:42.245828 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-47bp7_1ab2fd6b-f417-4b0e-b1ac-d374d64b7712/ovsdb-server/0.log" Nov 25 21:28:42 crc kubenswrapper[4983]: I1125 21:28:42.255364 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-47bp7_1ab2fd6b-f417-4b0e-b1ac-d374d64b7712/ovs-vswitchd/0.log" Nov 25 21:28:42 crc kubenswrapper[4983]: I1125 21:28:42.276684 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-47bp7_1ab2fd6b-f417-4b0e-b1ac-d374d64b7712/ovsdb-server-init/0.log" Nov 25 21:28:42 crc kubenswrapper[4983]: I1125 21:28:42.457258 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qkd9n_71b3a358-6645-404b-8d14-cb6371e7fce4/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:42 crc kubenswrapper[4983]: I1125 21:28:42.541438 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b06f6c03-dbba-48c9-901d-8cf6ef8048b1/openstack-network-exporter/0.log" Nov 25 21:28:42 crc kubenswrapper[4983]: I1125 21:28:42.551582 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b06f6c03-dbba-48c9-901d-8cf6ef8048b1/ovn-northd/0.log" Nov 25 21:28:42 crc kubenswrapper[4983]: I1125 21:28:42.833759 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_255fbb78-ee7b-4e1f-bd48-d260792d9be4/ovsdbserver-nb/0.log" Nov 25 21:28:42 crc kubenswrapper[4983]: I1125 21:28:42.841792 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_255fbb78-ee7b-4e1f-bd48-d260792d9be4/openstack-network-exporter/0.log" Nov 25 21:28:43 crc kubenswrapper[4983]: I1125 21:28:43.021252 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_410c54ac-f4e0-4c9f-873e-939b19eb303b/openstack-network-exporter/0.log" Nov 25 21:28:43 crc kubenswrapper[4983]: I1125 21:28:43.090094 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_410c54ac-f4e0-4c9f-873e-939b19eb303b/ovsdbserver-sb/0.log" Nov 25 21:28:43 crc kubenswrapper[4983]: I1125 21:28:43.095666 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ffcbfd47b-hljtd_545d9a00-2ce8-463f-b16c-6b7c0ac426be/placement-api/0.log" Nov 25 21:28:43 crc kubenswrapper[4983]: I1125 21:28:43.306614 4983 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_placement-7ffcbfd47b-hljtd_545d9a00-2ce8-463f-b16c-6b7c0ac426be/placement-log/0.log" Nov 25 21:28:43 crc kubenswrapper[4983]: I1125 21:28:43.323197 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e5063408-1226-4adc-86e9-194a32761df9/setup-container/0.log" Nov 25 21:28:43 crc kubenswrapper[4983]: I1125 21:28:43.575207 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_df26c674-505f-44d6-9fd2-24d745739946/setup-container/0.log" Nov 25 21:28:43 crc kubenswrapper[4983]: I1125 21:28:43.589763 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e5063408-1226-4adc-86e9-194a32761df9/rabbitmq/0.log" Nov 25 21:28:43 crc kubenswrapper[4983]: I1125 21:28:43.600686 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e5063408-1226-4adc-86e9-194a32761df9/setup-container/0.log" Nov 25 21:28:43 crc kubenswrapper[4983]: I1125 21:28:43.825142 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_df26c674-505f-44d6-9fd2-24d745739946/rabbitmq/0.log" Nov 25 21:28:43 crc kubenswrapper[4983]: I1125 21:28:43.848131 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt_11f44fe7-6b39-418d-9c54-d0f05318f412/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:43 crc kubenswrapper[4983]: I1125 21:28:43.864861 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_df26c674-505f-44d6-9fd2-24d745739946/setup-container/0.log" Nov 25 21:28:44 crc kubenswrapper[4983]: I1125 21:28:44.072633 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-5csrn_f1c31e5c-0dca-4993-b845-286b47b3b6ee/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:44 crc 
kubenswrapper[4983]: I1125 21:28:44.147383 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz_599f17a5-8483-4c0e-aca0-27677abeba08/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:44 crc kubenswrapper[4983]: I1125 21:28:44.275125 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-6lsbm_283ae6fd-423e-4c78-9c5b-85aab813c0b5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:44 crc kubenswrapper[4983]: I1125 21:28:44.384111 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cdnxr_3080e73e-fbc1-4a80-827c-386f923dd01b/ssh-known-hosts-edpm-deployment/0.log" Nov 25 21:28:44 crc kubenswrapper[4983]: I1125 21:28:44.595248 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6466c6df55-xffs5_b05ecf5f-8220-40f4-b459-27d2dd7c6fbf/proxy-httpd/0.log" Nov 25 21:28:44 crc kubenswrapper[4983]: I1125 21:28:44.611111 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6466c6df55-xffs5_b05ecf5f-8220-40f4-b459-27d2dd7c6fbf/proxy-server/0.log" Nov 25 21:28:44 crc kubenswrapper[4983]: I1125 21:28:44.695040 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-rln2d_9681d7cb-ab2c-4458-bc07-a7d278f16fd2/swift-ring-rebalance/0.log" Nov 25 21:28:44 crc kubenswrapper[4983]: I1125 21:28:44.848151 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/account-auditor/0.log" Nov 25 21:28:44 crc kubenswrapper[4983]: I1125 21:28:44.903216 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/account-reaper/0.log" Nov 25 21:28:44 crc kubenswrapper[4983]: I1125 21:28:44.904880 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/account-replicator/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.061736 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/account-server/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.084526 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/container-auditor/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.118861 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/container-replicator/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.137656 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/container-server/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.270689 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/container-updater/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.285197 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/object-auditor/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.350766 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/object-expirer/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.384788 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/object-replicator/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.474857 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/object-server/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.496602 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/object-updater/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.575666 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/rsync/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.585398 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/swift-recon-cron/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.771062 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm_34445193-9a8d-4ebd-ac42-d8348c11e375/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:45 crc kubenswrapper[4983]: I1125 21:28:45.820411 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_66868750-3f73-47fe-a353-f88441e69915/tempest-tests-tempest-tests-runner/0.log" Nov 25 21:28:46 crc kubenswrapper[4983]: I1125 21:28:46.030890 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8eafda73-e6d5-4b11-bd89-75308a7ca93b/test-operator-logs-container/0.log" Nov 25 21:28:46 crc kubenswrapper[4983]: I1125 21:28:46.047878 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf_5f2f45e7-9dd0-4273-bb23-9191f1a5ea93/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:28:57 crc kubenswrapper[4983]: I1125 21:28:57.316476 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_6ba42cf7-cc02-4214-a4e5-c20d987aed64/memcached/0.log" Nov 25 21:29:11 crc kubenswrapper[4983]: I1125 21:29:11.607923 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/util/0.log" Nov 25 21:29:11 crc kubenswrapper[4983]: I1125 21:29:11.826154 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/util/0.log" Nov 25 21:29:11 crc kubenswrapper[4983]: I1125 21:29:11.831526 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/pull/0.log" Nov 25 21:29:11 crc kubenswrapper[4983]: I1125 21:29:11.871826 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/pull/0.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.075544 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/extract/0.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.089832 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/pull/0.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.105949 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/util/0.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.369370 4983 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-nf6tq_1ec6aefb-824e-4248-ac00-c1d0b526edc6/manager/2.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.373676 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-nf6tq_1ec6aefb-824e-4248-ac00-c1d0b526edc6/kube-rbac-proxy/0.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.413227 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-nf6tq_1ec6aefb-824e-4248-ac00-c1d0b526edc6/manager/1.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.579149 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-b9lnt_cf765330-a0f9-4603-a92b-4aec8feaeafb/kube-rbac-proxy/0.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.596822 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-b9lnt_cf765330-a0f9-4603-a92b-4aec8feaeafb/manager/2.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.630010 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-b9lnt_cf765330-a0f9-4603-a92b-4aec8feaeafb/manager/1.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.766741 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-lzn84_00a7db78-81a7-481d-a20e-135c60e139e3/kube-rbac-proxy/0.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.797837 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-lzn84_00a7db78-81a7-481d-a20e-135c60e139e3/manager/2.log" Nov 25 21:29:12 crc 
kubenswrapper[4983]: I1125 21:29:12.865149 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-lzn84_00a7db78-81a7-481d-a20e-135c60e139e3/manager/1.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.962959 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-xvxp7_da827172-6e3a-42a7-814c-cdfcc18d48d6/kube-rbac-proxy/0.log" Nov 25 21:29:12 crc kubenswrapper[4983]: I1125 21:29:12.988423 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-xvxp7_da827172-6e3a-42a7-814c-cdfcc18d48d6/manager/2.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.059250 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-xvxp7_da827172-6e3a-42a7-814c-cdfcc18d48d6/manager/1.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.131211 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-t5knb_48b3567f-5b1a-4f14-891c-775c05e2d768/kube-rbac-proxy/0.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.177036 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-t5knb_48b3567f-5b1a-4f14-891c-775c05e2d768/manager/2.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.267703 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-t5knb_48b3567f-5b1a-4f14-891c-775c05e2d768/manager/1.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.354427 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-cctnq_72f1d28e-26ff-43d3-bd93-54c21d9cdd70/kube-rbac-proxy/0.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.395431 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-cctnq_72f1d28e-26ff-43d3-bd93-54c21d9cdd70/manager/2.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.439668 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-cctnq_72f1d28e-26ff-43d3-bd93-54c21d9cdd70/manager/1.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.552876 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-qlm9k_0d3d657c-e179-43c7-abca-c37f8396d1cd/kube-rbac-proxy/0.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.588747 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-qlm9k_0d3d657c-e179-43c7-abca-c37f8396d1cd/manager/2.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.618614 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-qlm9k_0d3d657c-e179-43c7-abca-c37f8396d1cd/manager/1.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.752876 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-9zpxb_e1668e7f-55bb-415c-b378-1c70483b30a6/kube-rbac-proxy/0.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.761278 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-9zpxb_e1668e7f-55bb-415c-b378-1c70483b30a6/manager/3.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.829470 4983 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-9zpxb_e1668e7f-55bb-415c-b378-1c70483b30a6/manager/2.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.924803 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-fchv4_e5edd26f-9ffb-4be8-86c1-99d32e812816/kube-rbac-proxy/0.log" Nov 25 21:29:13 crc kubenswrapper[4983]: I1125 21:29:13.961908 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-fchv4_e5edd26f-9ffb-4be8-86c1-99d32e812816/manager/2.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.021729 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-fchv4_e5edd26f-9ffb-4be8-86c1-99d32e812816/manager/1.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.156028 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-f8bh4_2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98/manager/2.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.169783 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-f8bh4_2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98/kube-rbac-proxy/0.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.239724 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-f8bh4_2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98/manager/1.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.316887 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-rwkrr_afff7723-36e3-42ae-9fac-9f8fdb86d839/kube-rbac-proxy/0.log" Nov 25 21:29:14 crc 
kubenswrapper[4983]: I1125 21:29:14.371852 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-rwkrr_afff7723-36e3-42ae-9fac-9f8fdb86d839/manager/2.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.442100 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-rwkrr_afff7723-36e3-42ae-9fac-9f8fdb86d839/manager/1.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.536435 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-ljpb8_badb10c7-4c8c-42c4-b481-221377fa7255/kube-rbac-proxy/0.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.576563 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-ljpb8_badb10c7-4c8c-42c4-b481-221377fa7255/manager/2.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.624965 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-ljpb8_badb10c7-4c8c-42c4-b481-221377fa7255/manager/1.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.710735 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-dj7nt_9d7c78e4-4890-4527-9db4-131842750615/kube-rbac-proxy/0.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.797615 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-dj7nt_9d7c78e4-4890-4527-9db4-131842750615/manager/2.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.815365 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-dj7nt_9d7c78e4-4890-4527-9db4-131842750615/manager/1.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.922404 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-p8q9g_a096f840-35b3-48c1-8c0e-762b67b8bde0/kube-rbac-proxy/0.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.947249 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-p8q9g_a096f840-35b3-48c1-8c0e-762b67b8bde0/manager/3.log" Nov 25 21:29:14 crc kubenswrapper[4983]: I1125 21:29:14.977366 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-p8q9g_a096f840-35b3-48c1-8c0e-762b67b8bde0/manager/2.log" Nov 25 21:29:15 crc kubenswrapper[4983]: I1125 21:29:15.080729 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg_4743af06-44e2-438a-82b7-bf32b0f5ca03/kube-rbac-proxy/0.log" Nov 25 21:29:15 crc kubenswrapper[4983]: I1125 21:29:15.123460 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg_4743af06-44e2-438a-82b7-bf32b0f5ca03/manager/0.log" Nov 25 21:29:15 crc kubenswrapper[4983]: I1125 21:29:15.140509 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg_4743af06-44e2-438a-82b7-bf32b0f5ca03/manager/1.log" Nov 25 21:29:15 crc kubenswrapper[4983]: I1125 21:29:15.279866 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cf7cd9d4-bwfnd_f32095da-1fdc-4d52-b082-98b39652cdc6/manager/1.log" Nov 25 21:29:15 crc kubenswrapper[4983]: 
I1125 21:29:15.367367 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6b8dd87645-g89th_668ad5ef-ec7f-4239-94c5-8bb868f653ce/operator/1.log" Nov 25 21:29:15 crc kubenswrapper[4983]: I1125 21:29:15.539928 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cf7cd9d4-bwfnd_f32095da-1fdc-4d52-b082-98b39652cdc6/manager/2.log" Nov 25 21:29:15 crc kubenswrapper[4983]: I1125 21:29:15.609640 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6b8dd87645-g89th_668ad5ef-ec7f-4239-94c5-8bb868f653ce/operator/0.log" Nov 25 21:29:15 crc kubenswrapper[4983]: I1125 21:29:15.741260 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-stqz2_2a6b637c-d929-42fa-89c6-8e5af3746cc1/registry-server/0.log" Nov 25 21:29:15 crc kubenswrapper[4983]: I1125 21:29:15.812699 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-zc5rq_d7302bdd-d74f-4d95-a354-42fcd52bf22e/manager/2.log" Nov 25 21:29:15 crc kubenswrapper[4983]: I1125 21:29:15.828970 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-zc5rq_d7302bdd-d74f-4d95-a354-42fcd52bf22e/kube-rbac-proxy/0.log" Nov 25 21:29:15 crc kubenswrapper[4983]: I1125 21:29:15.931613 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-zc5rq_d7302bdd-d74f-4d95-a354-42fcd52bf22e/manager/1.log" Nov 25 21:29:15 crc kubenswrapper[4983]: I1125 21:29:15.985900 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-mhjtj_64141c1d-799a-4d72-aa99-e54975052879/kube-rbac-proxy/0.log" Nov 25 21:29:16 crc 
kubenswrapper[4983]: I1125 21:29:16.025319 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-mhjtj_64141c1d-799a-4d72-aa99-e54975052879/manager/2.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.069513 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-mhjtj_64141c1d-799a-4d72-aa99-e54975052879/manager/1.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.153720 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-bwf7d_ff284fea-7792-40e1-8ede-f52412a6c014/operator/2.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.187625 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-bwf7d_ff284fea-7792-40e1-8ede-f52412a6c014/operator/1.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.262828 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-4c95t_5b14316c-9639-4934-a5e9-5381d2797ef5/kube-rbac-proxy/0.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.340125 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-4c95t_5b14316c-9639-4934-a5e9-5381d2797ef5/manager/2.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.423013 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-4c95t_5b14316c-9639-4934-a5e9-5381d2797ef5/manager/1.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.454015 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b7bb74d9f-m9bbx_92f1d8fa-69cf-49c3-a616-82a185ff8dd5/kube-rbac-proxy/0.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.471855 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b7bb74d9f-m9bbx_92f1d8fa-69cf-49c3-a616-82a185ff8dd5/manager/2.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.541825 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b7bb74d9f-m9bbx_92f1d8fa-69cf-49c3-a616-82a185ff8dd5/manager/1.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.626597 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-lr7wt_ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042/kube-rbac-proxy/0.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.698372 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-lr7wt_ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042/manager/1.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.740834 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-lr7wt_ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042/manager/0.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.797767 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-rpfhz_1e439ca1-98f3-4650-96da-1e4c1b2da37e/kube-rbac-proxy/0.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.896406 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-rpfhz_1e439ca1-98f3-4650-96da-1e4c1b2da37e/manager/2.log" Nov 25 21:29:16 crc kubenswrapper[4983]: I1125 21:29:16.942935 4983 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-rpfhz_1e439ca1-98f3-4650-96da-1e4c1b2da37e/manager/1.log" Nov 25 21:29:37 crc kubenswrapper[4983]: I1125 21:29:37.206258 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-d2mch_ec800216-1c1b-4324-a1be-2a0c5dcc6ce5/control-plane-machine-set-operator/0.log" Nov 25 21:29:37 crc kubenswrapper[4983]: I1125 21:29:37.374883 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ztngk_aed03db9-cd2b-4aa5-96d4-de0e00e95842/machine-api-operator/0.log" Nov 25 21:29:37 crc kubenswrapper[4983]: I1125 21:29:37.409541 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ztngk_aed03db9-cd2b-4aa5-96d4-de0e00e95842/kube-rbac-proxy/0.log" Nov 25 21:29:53 crc kubenswrapper[4983]: I1125 21:29:53.077031 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-wgvpx_3603f9e9-5a0e-4283-86a8-4fa4a2b1d98a/cert-manager-controller/0.log" Nov 25 21:29:53 crc kubenswrapper[4983]: I1125 21:29:53.162106 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-vgj58_a84e28f5-6c16-49c9-aaee-2e1ba4b547a3/cert-manager-cainjector/0.log" Nov 25 21:29:53 crc kubenswrapper[4983]: I1125 21:29:53.245219 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-5xb5k_fea16ade-51b4-491b-acb2-4a3d5974bf0c/cert-manager-webhook/0.log" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.169436 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46"] Nov 25 21:30:00 crc kubenswrapper[4983]: E1125 21:30:00.170619 4983 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3fab102e-55ec-4c52-91a7-b3db9d74be29" containerName="container-00" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.170668 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fab102e-55ec-4c52-91a7-b3db9d74be29" containerName="container-00" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.170992 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fab102e-55ec-4c52-91a7-b3db9d74be29" containerName="container-00" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.171992 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.174207 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.175136 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.189796 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46"] Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.308141 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc46n\" (UniqueName: \"kubernetes.io/projected/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-kube-api-access-xc46n\") pod \"collect-profiles-29401770-n6h46\" (UID: \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.308227 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-secret-volume\") pod \"collect-profiles-29401770-n6h46\" (UID: \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.308301 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-config-volume\") pod \"collect-profiles-29401770-n6h46\" (UID: \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.410890 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-secret-volume\") pod \"collect-profiles-29401770-n6h46\" (UID: \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.411023 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-config-volume\") pod \"collect-profiles-29401770-n6h46\" (UID: \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.411262 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc46n\" (UniqueName: \"kubernetes.io/projected/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-kube-api-access-xc46n\") pod \"collect-profiles-29401770-n6h46\" (UID: \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:00 crc kubenswrapper[4983]: 
I1125 21:30:00.411970 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-config-volume\") pod \"collect-profiles-29401770-n6h46\" (UID: \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.420635 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-secret-volume\") pod \"collect-profiles-29401770-n6h46\" (UID: \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.432203 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc46n\" (UniqueName: \"kubernetes.io/projected/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-kube-api-access-xc46n\") pod \"collect-profiles-29401770-n6h46\" (UID: \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:00 crc kubenswrapper[4983]: I1125 21:30:00.511206 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:01 crc kubenswrapper[4983]: I1125 21:30:01.030055 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46"] Nov 25 21:30:02 crc kubenswrapper[4983]: I1125 21:30:02.056900 4983 generic.go:334] "Generic (PLEG): container finished" podID="e6bcca12-6f02-4c95-8ac2-d7b35880ac7f" containerID="72d2de0fc6495a365cb67ae58523f11c56e75396b43a9d762d95fb74b80813ee" exitCode=0 Nov 25 21:30:02 crc kubenswrapper[4983]: I1125 21:30:02.057238 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" event={"ID":"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f","Type":"ContainerDied","Data":"72d2de0fc6495a365cb67ae58523f11c56e75396b43a9d762d95fb74b80813ee"} Nov 25 21:30:02 crc kubenswrapper[4983]: I1125 21:30:02.063798 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" event={"ID":"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f","Type":"ContainerStarted","Data":"74adf18eaebf035b49949f7e99a02eaabab7b572e593b4da0abfef508a4b407d"} Nov 25 21:30:03 crc kubenswrapper[4983]: I1125 21:30:03.463634 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:03 crc kubenswrapper[4983]: I1125 21:30:03.584918 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc46n\" (UniqueName: \"kubernetes.io/projected/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-kube-api-access-xc46n\") pod \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\" (UID: \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\") " Nov 25 21:30:03 crc kubenswrapper[4983]: I1125 21:30:03.585137 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-secret-volume\") pod \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\" (UID: \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\") " Nov 25 21:30:03 crc kubenswrapper[4983]: I1125 21:30:03.585326 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-config-volume\") pod \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\" (UID: \"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f\") " Nov 25 21:30:03 crc kubenswrapper[4983]: I1125 21:30:03.585954 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-config-volume" (OuterVolumeSpecName: "config-volume") pod "e6bcca12-6f02-4c95-8ac2-d7b35880ac7f" (UID: "e6bcca12-6f02-4c95-8ac2-d7b35880ac7f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 21:30:03 crc kubenswrapper[4983]: I1125 21:30:03.591675 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e6bcca12-6f02-4c95-8ac2-d7b35880ac7f" (UID: "e6bcca12-6f02-4c95-8ac2-d7b35880ac7f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 21:30:03 crc kubenswrapper[4983]: I1125 21:30:03.591820 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-kube-api-access-xc46n" (OuterVolumeSpecName: "kube-api-access-xc46n") pod "e6bcca12-6f02-4c95-8ac2-d7b35880ac7f" (UID: "e6bcca12-6f02-4c95-8ac2-d7b35880ac7f"). InnerVolumeSpecName "kube-api-access-xc46n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:30:03 crc kubenswrapper[4983]: I1125 21:30:03.687592 4983 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 21:30:03 crc kubenswrapper[4983]: I1125 21:30:03.687647 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc46n\" (UniqueName: \"kubernetes.io/projected/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-kube-api-access-xc46n\") on node \"crc\" DevicePath \"\"" Nov 25 21:30:03 crc kubenswrapper[4983]: I1125 21:30:03.687667 4983 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6bcca12-6f02-4c95-8ac2-d7b35880ac7f-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 21:30:04 crc kubenswrapper[4983]: I1125 21:30:04.078624 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" event={"ID":"e6bcca12-6f02-4c95-8ac2-d7b35880ac7f","Type":"ContainerDied","Data":"74adf18eaebf035b49949f7e99a02eaabab7b572e593b4da0abfef508a4b407d"} Nov 25 21:30:04 crc kubenswrapper[4983]: I1125 21:30:04.078666 4983 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74adf18eaebf035b49949f7e99a02eaabab7b572e593b4da0abfef508a4b407d" Nov 25 21:30:04 crc kubenswrapper[4983]: I1125 21:30:04.078719 4983 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401770-n6h46" Nov 25 21:30:04 crc kubenswrapper[4983]: I1125 21:30:04.544798 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck"] Nov 25 21:30:04 crc kubenswrapper[4983]: I1125 21:30:04.553603 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401725-zw2ck"] Nov 25 21:30:05 crc kubenswrapper[4983]: I1125 21:30:05.618758 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="361000b8-fec7-4af1-8453-05a888ce3db9" path="/var/lib/kubelet/pods/361000b8-fec7-4af1-8453-05a888ce3db9/volumes" Nov 25 21:30:07 crc kubenswrapper[4983]: I1125 21:30:07.186005 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-fkqdw_cdedcf78-6faf-457d-8817-2d87dc07b913/nmstate-console-plugin/0.log" Nov 25 21:30:07 crc kubenswrapper[4983]: I1125 21:30:07.380956 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q89j2_ab060c60-6a98-4358-a028-e3600d0239f4/nmstate-handler/0.log" Nov 25 21:30:07 crc kubenswrapper[4983]: I1125 21:30:07.425819 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-gz8k6_1064f79e-2d97-4733-a2f2-f5f96b204825/nmstate-metrics/0.log" Nov 25 21:30:07 crc kubenswrapper[4983]: I1125 21:30:07.448300 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-gz8k6_1064f79e-2d97-4733-a2f2-f5f96b204825/kube-rbac-proxy/0.log" Nov 25 21:30:07 crc kubenswrapper[4983]: I1125 21:30:07.741156 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-5tmqq_fdab2d13-eec3-468a-b383-e6bc7e00849f/nmstate-operator/0.log" Nov 25 21:30:07 crc kubenswrapper[4983]: I1125 
21:30:07.791884 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-7kqbd_ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4/nmstate-webhook/0.log" Nov 25 21:30:09 crc kubenswrapper[4983]: I1125 21:30:09.927529 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:30:09 crc kubenswrapper[4983]: I1125 21:30:09.927967 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:30:23 crc kubenswrapper[4983]: I1125 21:30:23.884686 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-rfq8m_d63bc930-d0b8-4b74-924f-def9a4c05193/kube-rbac-proxy/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.002485 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-rfq8m_d63bc930-d0b8-4b74-924f-def9a4c05193/controller/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.101391 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-frr-files/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.258502 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-reloader/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.266963 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-metrics/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.284606 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-frr-files/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.309973 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-reloader/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.479823 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-frr-files/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.489928 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-metrics/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.503157 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-metrics/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.522479 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-reloader/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.666435 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-metrics/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.718638 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-reloader/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.720012 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-frr-files/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.748365 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/controller/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.926828 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/frr-metrics/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.941041 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/kube-rbac-proxy-frr/0.log" Nov 25 21:30:24 crc kubenswrapper[4983]: I1125 21:30:24.942354 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/kube-rbac-proxy/0.log" Nov 25 21:30:25 crc kubenswrapper[4983]: I1125 21:30:25.163810 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/reloader/0.log" Nov 25 21:30:25 crc kubenswrapper[4983]: I1125 21:30:25.228647 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-6sz7r_4cb9ac50-997a-4361-be38-99a645916356/frr-k8s-webhook-server/0.log" Nov 25 21:30:25 crc kubenswrapper[4983]: I1125 21:30:25.389466 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6dcc87d69d-p8fwj_74baeb7c-21f0-4d1c-9a61-7694f59cc161/manager/3.log" Nov 25 21:30:25 crc kubenswrapper[4983]: I1125 21:30:25.472149 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6dcc87d69d-p8fwj_74baeb7c-21f0-4d1c-9a61-7694f59cc161/manager/2.log" Nov 25 21:30:25 crc kubenswrapper[4983]: I1125 21:30:25.645249 4983 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7fdb8c798-tkp7s_668e90f8-b352-4ad3-8965-1394ac68bf45/webhook-server/0.log" Nov 25 21:30:25 crc kubenswrapper[4983]: I1125 21:30:25.799015 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q2pnt_b17632af-d63d-48e6-bbc3-e4056a403b94/kube-rbac-proxy/0.log" Nov 25 21:30:26 crc kubenswrapper[4983]: I1125 21:30:26.353957 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/frr/0.log" Nov 25 21:30:26 crc kubenswrapper[4983]: I1125 21:30:26.362182 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q2pnt_b17632af-d63d-48e6-bbc3-e4056a403b94/speaker/0.log" Nov 25 21:30:39 crc kubenswrapper[4983]: I1125 21:30:39.927976 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:30:39 crc kubenswrapper[4983]: I1125 21:30:39.928504 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:30:39 crc kubenswrapper[4983]: I1125 21:30:39.956766 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/util/0.log" Nov 25 21:30:40 crc kubenswrapper[4983]: I1125 21:30:40.180546 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/util/0.log" Nov 25 21:30:40 crc kubenswrapper[4983]: I1125 21:30:40.198033 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/pull/0.log" Nov 25 21:30:40 crc kubenswrapper[4983]: I1125 21:30:40.252582 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/pull/0.log" Nov 25 21:30:40 crc kubenswrapper[4983]: I1125 21:30:40.397064 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/pull/0.log" Nov 25 21:30:40 crc kubenswrapper[4983]: I1125 21:30:40.402131 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/extract/0.log" Nov 25 21:30:40 crc kubenswrapper[4983]: I1125 21:30:40.453433 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/util/0.log" Nov 25 21:30:40 crc kubenswrapper[4983]: I1125 21:30:40.551120 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/extract-utilities/0.log" Nov 25 21:30:40 crc kubenswrapper[4983]: I1125 21:30:40.751890 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/extract-utilities/0.log" Nov 25 21:30:40 crc kubenswrapper[4983]: I1125 
21:30:40.760965 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/extract-content/0.log" Nov 25 21:30:40 crc kubenswrapper[4983]: I1125 21:30:40.786290 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/extract-content/0.log" Nov 25 21:30:41 crc kubenswrapper[4983]: I1125 21:30:41.011838 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/extract-content/0.log" Nov 25 21:30:41 crc kubenswrapper[4983]: I1125 21:30:41.046787 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/extract-utilities/0.log" Nov 25 21:30:41 crc kubenswrapper[4983]: I1125 21:30:41.211271 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/extract-utilities/0.log" Nov 25 21:30:41 crc kubenswrapper[4983]: I1125 21:30:41.323415 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/registry-server/0.log" Nov 25 21:30:41 crc kubenswrapper[4983]: I1125 21:30:41.462122 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/extract-content/0.log" Nov 25 21:30:41 crc kubenswrapper[4983]: I1125 21:30:41.479331 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/extract-content/0.log" Nov 25 21:30:41 crc kubenswrapper[4983]: I1125 21:30:41.500807 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/extract-utilities/0.log" Nov 25 21:30:41 crc kubenswrapper[4983]: I1125 21:30:41.681908 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/extract-utilities/0.log" Nov 25 21:30:41 crc kubenswrapper[4983]: I1125 21:30:41.704725 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/extract-content/0.log" Nov 25 21:30:41 crc kubenswrapper[4983]: I1125 21:30:41.894816 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/util/0.log" Nov 25 21:30:42 crc kubenswrapper[4983]: I1125 21:30:42.084262 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/pull/0.log" Nov 25 21:30:42 crc kubenswrapper[4983]: I1125 21:30:42.104598 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/pull/0.log" Nov 25 21:30:42 crc kubenswrapper[4983]: I1125 21:30:42.129077 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/util/0.log" Nov 25 21:30:42 crc kubenswrapper[4983]: I1125 21:30:42.407685 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/util/0.log" Nov 25 21:30:42 crc kubenswrapper[4983]: I1125 21:30:42.409435 4983 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/pull/0.log" Nov 25 21:30:42 crc kubenswrapper[4983]: I1125 21:30:42.425451 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/extract/0.log" Nov 25 21:30:42 crc kubenswrapper[4983]: I1125 21:30:42.447988 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/registry-server/0.log" Nov 25 21:30:42 crc kubenswrapper[4983]: I1125 21:30:42.636810 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-kh7rb_168ec053-d5d4-4ebc-956d-429c0d2ff5fb/marketplace-operator/0.log" Nov 25 21:30:42 crc kubenswrapper[4983]: I1125 21:30:42.657929 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/extract-utilities/0.log" Nov 25 21:30:42 crc kubenswrapper[4983]: I1125 21:30:42.813763 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/extract-utilities/0.log" Nov 25 21:30:42 crc kubenswrapper[4983]: I1125 21:30:42.861133 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/extract-content/0.log" Nov 25 21:30:42 crc kubenswrapper[4983]: I1125 21:30:42.876005 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/extract-content/0.log" Nov 25 21:30:43 crc kubenswrapper[4983]: I1125 21:30:43.018672 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/extract-utilities/0.log" Nov 25 21:30:43 crc kubenswrapper[4983]: I1125 21:30:43.040624 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/extract-content/0.log" Nov 25 21:30:43 crc kubenswrapper[4983]: I1125 21:30:43.170126 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/registry-server/0.log" Nov 25 21:30:43 crc kubenswrapper[4983]: I1125 21:30:43.192868 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/extract-utilities/0.log" Nov 25 21:30:43 crc kubenswrapper[4983]: I1125 21:30:43.396731 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/extract-content/0.log" Nov 25 21:30:43 crc kubenswrapper[4983]: I1125 21:30:43.399338 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/extract-content/0.log" Nov 25 21:30:43 crc kubenswrapper[4983]: I1125 21:30:43.427258 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/extract-utilities/0.log" Nov 25 21:30:43 crc kubenswrapper[4983]: I1125 21:30:43.565966 4983 scope.go:117] "RemoveContainer" containerID="7dc0cddef90b976b9ecb97d3ea1c4e9a30fc781e08b00d6fd71aa3c548ebb8f3" Nov 25 21:30:43 crc kubenswrapper[4983]: I1125 21:30:43.598080 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/extract-utilities/0.log" Nov 25 21:30:43 crc kubenswrapper[4983]: I1125 
21:30:43.598340 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/extract-content/0.log" Nov 25 21:30:44 crc kubenswrapper[4983]: I1125 21:30:44.014847 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/registry-server/0.log" Nov 25 21:31:09 crc kubenswrapper[4983]: I1125 21:31:09.927311 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:31:09 crc kubenswrapper[4983]: I1125 21:31:09.927944 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:31:09 crc kubenswrapper[4983]: I1125 21:31:09.927993 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 21:31:09 crc kubenswrapper[4983]: I1125 21:31:09.928780 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 21:31:09 crc kubenswrapper[4983]: I1125 21:31:09.928845 4983 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" gracePeriod=600 Nov 25 21:31:10 crc kubenswrapper[4983]: E1125 21:31:10.051708 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:31:10 crc kubenswrapper[4983]: I1125 21:31:10.132690 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" exitCode=0 Nov 25 21:31:10 crc kubenswrapper[4983]: I1125 21:31:10.132737 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f"} Nov 25 21:31:10 crc kubenswrapper[4983]: I1125 21:31:10.132777 4983 scope.go:117] "RemoveContainer" containerID="1093988904d20263a90201b32a3efa1b3b7f1044a5fe733ac6c401df50e9c93c" Nov 25 21:31:10 crc kubenswrapper[4983]: I1125 21:31:10.133338 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:31:10 crc kubenswrapper[4983]: E1125 21:31:10.133625 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:31:24 crc kubenswrapper[4983]: I1125 21:31:24.605895 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:31:24 crc kubenswrapper[4983]: E1125 21:31:24.607133 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:31:36 crc kubenswrapper[4983]: I1125 21:31:36.605265 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:31:36 crc kubenswrapper[4983]: E1125 21:31:36.606336 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:31:50 crc kubenswrapper[4983]: I1125 21:31:50.606975 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:31:50 crc kubenswrapper[4983]: E1125 21:31:50.608014 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:32:00 crc kubenswrapper[4983]: I1125 21:32:00.932232 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rmvr5"] Nov 25 21:32:00 crc kubenswrapper[4983]: E1125 21:32:00.933228 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6bcca12-6f02-4c95-8ac2-d7b35880ac7f" containerName="collect-profiles" Nov 25 21:32:00 crc kubenswrapper[4983]: I1125 21:32:00.933240 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6bcca12-6f02-4c95-8ac2-d7b35880ac7f" containerName="collect-profiles" Nov 25 21:32:00 crc kubenswrapper[4983]: I1125 21:32:00.933442 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6bcca12-6f02-4c95-8ac2-d7b35880ac7f" containerName="collect-profiles" Nov 25 21:32:00 crc kubenswrapper[4983]: I1125 21:32:00.934940 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:00 crc kubenswrapper[4983]: I1125 21:32:00.950067 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmvr5"] Nov 25 21:32:01 crc kubenswrapper[4983]: I1125 21:32:01.005460 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-catalog-content\") pod \"redhat-marketplace-rmvr5\" (UID: \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\") " pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:01 crc kubenswrapper[4983]: I1125 21:32:01.005512 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-utilities\") pod \"redhat-marketplace-rmvr5\" (UID: \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\") " pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:01 crc kubenswrapper[4983]: I1125 21:32:01.005663 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp2xn\" (UniqueName: \"kubernetes.io/projected/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-kube-api-access-tp2xn\") pod \"redhat-marketplace-rmvr5\" (UID: \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\") " pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:01 crc kubenswrapper[4983]: I1125 21:32:01.107306 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp2xn\" (UniqueName: \"kubernetes.io/projected/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-kube-api-access-tp2xn\") pod \"redhat-marketplace-rmvr5\" (UID: \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\") " pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:01 crc kubenswrapper[4983]: I1125 21:32:01.107473 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-catalog-content\") pod \"redhat-marketplace-rmvr5\" (UID: \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\") " pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:01 crc kubenswrapper[4983]: I1125 21:32:01.107517 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-utilities\") pod \"redhat-marketplace-rmvr5\" (UID: \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\") " pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:01 crc kubenswrapper[4983]: I1125 21:32:01.108110 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-catalog-content\") pod \"redhat-marketplace-rmvr5\" (UID: \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\") " pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:01 crc kubenswrapper[4983]: I1125 21:32:01.108132 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-utilities\") pod \"redhat-marketplace-rmvr5\" (UID: \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\") " pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:01 crc kubenswrapper[4983]: I1125 21:32:01.138430 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp2xn\" (UniqueName: \"kubernetes.io/projected/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-kube-api-access-tp2xn\") pod \"redhat-marketplace-rmvr5\" (UID: \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\") " pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:01 crc kubenswrapper[4983]: I1125 21:32:01.301174 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:01 crc kubenswrapper[4983]: I1125 21:32:01.819613 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmvr5"] Nov 25 21:32:02 crc kubenswrapper[4983]: I1125 21:32:02.713590 4983 generic.go:334] "Generic (PLEG): container finished" podID="7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" containerID="c8938550726ba1c6e341a3381c8e1120ebc3eeafc66d57d80dfaa4b83606e7f4" exitCode=0 Nov 25 21:32:02 crc kubenswrapper[4983]: I1125 21:32:02.713718 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmvr5" event={"ID":"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1","Type":"ContainerDied","Data":"c8938550726ba1c6e341a3381c8e1120ebc3eeafc66d57d80dfaa4b83606e7f4"} Nov 25 21:32:02 crc kubenswrapper[4983]: I1125 21:32:02.713947 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmvr5" event={"ID":"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1","Type":"ContainerStarted","Data":"63f2148d002ebe1dfbbbd6199fc9a561b8786dfeb44acbd634061f91f64ef10d"} Nov 25 21:32:02 crc kubenswrapper[4983]: I1125 21:32:02.715489 4983 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 21:32:03 crc kubenswrapper[4983]: I1125 21:32:03.734399 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmvr5" event={"ID":"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1","Type":"ContainerStarted","Data":"9da60236f35daa5ee0f00a6a0aaa3f331d6ed0fb48d2fc8416988477cfab7a07"} Nov 25 21:32:04 crc kubenswrapper[4983]: I1125 21:32:04.748521 4983 generic.go:334] "Generic (PLEG): container finished" podID="7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" containerID="9da60236f35daa5ee0f00a6a0aaa3f331d6ed0fb48d2fc8416988477cfab7a07" exitCode=0 Nov 25 21:32:04 crc kubenswrapper[4983]: I1125 21:32:04.748584 4983 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-rmvr5" event={"ID":"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1","Type":"ContainerDied","Data":"9da60236f35daa5ee0f00a6a0aaa3f331d6ed0fb48d2fc8416988477cfab7a07"} Nov 25 21:32:05 crc kubenswrapper[4983]: I1125 21:32:05.611458 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:32:05 crc kubenswrapper[4983]: E1125 21:32:05.612181 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:32:05 crc kubenswrapper[4983]: I1125 21:32:05.758110 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmvr5" event={"ID":"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1","Type":"ContainerStarted","Data":"9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2"} Nov 25 21:32:05 crc kubenswrapper[4983]: I1125 21:32:05.780009 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rmvr5" podStartSLOduration=3.25227387 podStartE2EDuration="5.779992957s" podCreationTimestamp="2025-11-25 21:32:00 +0000 UTC" firstStartedPulling="2025-11-25 21:32:02.715303863 +0000 UTC m=+3903.827837255" lastFinishedPulling="2025-11-25 21:32:05.24302292 +0000 UTC m=+3906.355556342" observedRunningTime="2025-11-25 21:32:05.779878464 +0000 UTC m=+3906.892411866" watchObservedRunningTime="2025-11-25 21:32:05.779992957 +0000 UTC m=+3906.892526349" Nov 25 21:32:11 crc kubenswrapper[4983]: I1125 21:32:11.302015 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:11 crc kubenswrapper[4983]: I1125 21:32:11.302588 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:11 crc kubenswrapper[4983]: I1125 21:32:11.358777 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:11 crc kubenswrapper[4983]: I1125 21:32:11.913056 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:11 crc kubenswrapper[4983]: I1125 21:32:11.961694 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmvr5"] Nov 25 21:32:13 crc kubenswrapper[4983]: I1125 21:32:13.853078 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rmvr5" podUID="7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" containerName="registry-server" containerID="cri-o://9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2" gracePeriod=2 Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.288417 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.316660 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-catalog-content\") pod \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\" (UID: \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\") " Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.316739 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-utilities\") pod \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\" (UID: \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\") " Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.316911 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp2xn\" (UniqueName: \"kubernetes.io/projected/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-kube-api-access-tp2xn\") pod \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\" (UID: \"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1\") " Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.317909 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-utilities" (OuterVolumeSpecName: "utilities") pod "7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" (UID: "7d6bb181-e9ce-4fa3-94df-86af04ecb9a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.325819 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-kube-api-access-tp2xn" (OuterVolumeSpecName: "kube-api-access-tp2xn") pod "7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" (UID: "7d6bb181-e9ce-4fa3-94df-86af04ecb9a1"). InnerVolumeSpecName "kube-api-access-tp2xn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.348028 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" (UID: "7d6bb181-e9ce-4fa3-94df-86af04ecb9a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.418982 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.419032 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.419043 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp2xn\" (UniqueName: \"kubernetes.io/projected/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1-kube-api-access-tp2xn\") on node \"crc\" DevicePath \"\"" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.868282 4983 generic.go:334] "Generic (PLEG): container finished" podID="7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" containerID="9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2" exitCode=0 Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.868364 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rmvr5" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.868355 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmvr5" event={"ID":"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1","Type":"ContainerDied","Data":"9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2"} Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.868418 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rmvr5" event={"ID":"7d6bb181-e9ce-4fa3-94df-86af04ecb9a1","Type":"ContainerDied","Data":"63f2148d002ebe1dfbbbd6199fc9a561b8786dfeb44acbd634061f91f64ef10d"} Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.868467 4983 scope.go:117] "RemoveContainer" containerID="9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.902833 4983 scope.go:117] "RemoveContainer" containerID="9da60236f35daa5ee0f00a6a0aaa3f331d6ed0fb48d2fc8416988477cfab7a07" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.926168 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmvr5"] Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.933107 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rmvr5"] Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.938613 4983 scope.go:117] "RemoveContainer" containerID="c8938550726ba1c6e341a3381c8e1120ebc3eeafc66d57d80dfaa4b83606e7f4" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.983970 4983 scope.go:117] "RemoveContainer" containerID="9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2" Nov 25 21:32:14 crc kubenswrapper[4983]: E1125 21:32:14.984411 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2\": container with ID starting with 9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2 not found: ID does not exist" containerID="9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.984457 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2"} err="failed to get container status \"9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2\": rpc error: code = NotFound desc = could not find container \"9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2\": container with ID starting with 9e0a824cbb08c5b02dfb6bb810a2ddc45df853386c075a138d519b2cc53950b2 not found: ID does not exist" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.984485 4983 scope.go:117] "RemoveContainer" containerID="9da60236f35daa5ee0f00a6a0aaa3f331d6ed0fb48d2fc8416988477cfab7a07" Nov 25 21:32:14 crc kubenswrapper[4983]: E1125 21:32:14.984915 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9da60236f35daa5ee0f00a6a0aaa3f331d6ed0fb48d2fc8416988477cfab7a07\": container with ID starting with 9da60236f35daa5ee0f00a6a0aaa3f331d6ed0fb48d2fc8416988477cfab7a07 not found: ID does not exist" containerID="9da60236f35daa5ee0f00a6a0aaa3f331d6ed0fb48d2fc8416988477cfab7a07" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.984965 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9da60236f35daa5ee0f00a6a0aaa3f331d6ed0fb48d2fc8416988477cfab7a07"} err="failed to get container status \"9da60236f35daa5ee0f00a6a0aaa3f331d6ed0fb48d2fc8416988477cfab7a07\": rpc error: code = NotFound desc = could not find container \"9da60236f35daa5ee0f00a6a0aaa3f331d6ed0fb48d2fc8416988477cfab7a07\": container with ID 
starting with 9da60236f35daa5ee0f00a6a0aaa3f331d6ed0fb48d2fc8416988477cfab7a07 not found: ID does not exist" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.984984 4983 scope.go:117] "RemoveContainer" containerID="c8938550726ba1c6e341a3381c8e1120ebc3eeafc66d57d80dfaa4b83606e7f4" Nov 25 21:32:14 crc kubenswrapper[4983]: E1125 21:32:14.986344 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8938550726ba1c6e341a3381c8e1120ebc3eeafc66d57d80dfaa4b83606e7f4\": container with ID starting with c8938550726ba1c6e341a3381c8e1120ebc3eeafc66d57d80dfaa4b83606e7f4 not found: ID does not exist" containerID="c8938550726ba1c6e341a3381c8e1120ebc3eeafc66d57d80dfaa4b83606e7f4" Nov 25 21:32:14 crc kubenswrapper[4983]: I1125 21:32:14.986416 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8938550726ba1c6e341a3381c8e1120ebc3eeafc66d57d80dfaa4b83606e7f4"} err="failed to get container status \"c8938550726ba1c6e341a3381c8e1120ebc3eeafc66d57d80dfaa4b83606e7f4\": rpc error: code = NotFound desc = could not find container \"c8938550726ba1c6e341a3381c8e1120ebc3eeafc66d57d80dfaa4b83606e7f4\": container with ID starting with c8938550726ba1c6e341a3381c8e1120ebc3eeafc66d57d80dfaa4b83606e7f4 not found: ID does not exist" Nov 25 21:32:15 crc kubenswrapper[4983]: I1125 21:32:15.617287 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" path="/var/lib/kubelet/pods/7d6bb181-e9ce-4fa3-94df-86af04ecb9a1/volumes" Nov 25 21:32:17 crc kubenswrapper[4983]: I1125 21:32:17.605536 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:32:17 crc kubenswrapper[4983]: E1125 21:32:17.606341 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:32:22 crc kubenswrapper[4983]: I1125 21:32:22.970544 4983 generic.go:334] "Generic (PLEG): container finished" podID="5a5a15e5-44af-45a8-88d1-6621a706d11f" containerID="0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044" exitCode=0 Nov 25 21:32:22 crc kubenswrapper[4983]: I1125 21:32:22.971118 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zkppv/must-gather-frc7g" event={"ID":"5a5a15e5-44af-45a8-88d1-6621a706d11f","Type":"ContainerDied","Data":"0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044"} Nov 25 21:32:22 crc kubenswrapper[4983]: I1125 21:32:22.972636 4983 scope.go:117] "RemoveContainer" containerID="0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044" Nov 25 21:32:23 crc kubenswrapper[4983]: I1125 21:32:23.199021 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zkppv_must-gather-frc7g_5a5a15e5-44af-45a8-88d1-6621a706d11f/gather/0.log" Nov 25 21:32:28 crc kubenswrapper[4983]: I1125 21:32:28.606046 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:32:28 crc kubenswrapper[4983]: E1125 21:32:28.608098 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:32:31 crc kubenswrapper[4983]: I1125 21:32:31.845659 4983 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zkppv/must-gather-frc7g"] Nov 25 21:32:31 crc kubenswrapper[4983]: I1125 21:32:31.846352 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zkppv/must-gather-frc7g" podUID="5a5a15e5-44af-45a8-88d1-6621a706d11f" containerName="copy" containerID="cri-o://bb0e9f0ee21a1ff4397dad429fcdedebd1c2564074479ba39c6f11d589cbc9ba" gracePeriod=2 Nov 25 21:32:31 crc kubenswrapper[4983]: I1125 21:32:31.855694 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zkppv/must-gather-frc7g"] Nov 25 21:32:32 crc kubenswrapper[4983]: I1125 21:32:32.876518 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zkppv_must-gather-frc7g_5a5a15e5-44af-45a8-88d1-6621a706d11f/copy/0.log" Nov 25 21:32:32 crc kubenswrapper[4983]: I1125 21:32:32.877441 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zkppv/must-gather-frc7g" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.052186 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a5a15e5-44af-45a8-88d1-6621a706d11f-must-gather-output\") pod \"5a5a15e5-44af-45a8-88d1-6621a706d11f\" (UID: \"5a5a15e5-44af-45a8-88d1-6621a706d11f\") " Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.052523 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxspx\" (UniqueName: \"kubernetes.io/projected/5a5a15e5-44af-45a8-88d1-6621a706d11f-kube-api-access-nxspx\") pod \"5a5a15e5-44af-45a8-88d1-6621a706d11f\" (UID: \"5a5a15e5-44af-45a8-88d1-6621a706d11f\") " Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.058685 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5a5a15e5-44af-45a8-88d1-6621a706d11f-kube-api-access-nxspx" (OuterVolumeSpecName: "kube-api-access-nxspx") pod "5a5a15e5-44af-45a8-88d1-6621a706d11f" (UID: "5a5a15e5-44af-45a8-88d1-6621a706d11f"). InnerVolumeSpecName "kube-api-access-nxspx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.088724 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zkppv_must-gather-frc7g_5a5a15e5-44af-45a8-88d1-6621a706d11f/copy/0.log" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.089167 4983 generic.go:334] "Generic (PLEG): container finished" podID="5a5a15e5-44af-45a8-88d1-6621a706d11f" containerID="bb0e9f0ee21a1ff4397dad429fcdedebd1c2564074479ba39c6f11d589cbc9ba" exitCode=143 Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.089225 4983 scope.go:117] "RemoveContainer" containerID="bb0e9f0ee21a1ff4397dad429fcdedebd1c2564074479ba39c6f11d589cbc9ba" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.089367 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zkppv/must-gather-frc7g" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.128038 4983 scope.go:117] "RemoveContainer" containerID="0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.154922 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxspx\" (UniqueName: \"kubernetes.io/projected/5a5a15e5-44af-45a8-88d1-6621a706d11f-kube-api-access-nxspx\") on node \"crc\" DevicePath \"\"" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.190997 4983 scope.go:117] "RemoveContainer" containerID="bb0e9f0ee21a1ff4397dad429fcdedebd1c2564074479ba39c6f11d589cbc9ba" Nov 25 21:32:33 crc kubenswrapper[4983]: E1125 21:32:33.192468 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb0e9f0ee21a1ff4397dad429fcdedebd1c2564074479ba39c6f11d589cbc9ba\": container with ID starting with bb0e9f0ee21a1ff4397dad429fcdedebd1c2564074479ba39c6f11d589cbc9ba not found: ID does not exist" containerID="bb0e9f0ee21a1ff4397dad429fcdedebd1c2564074479ba39c6f11d589cbc9ba" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.192538 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0e9f0ee21a1ff4397dad429fcdedebd1c2564074479ba39c6f11d589cbc9ba"} err="failed to get container status \"bb0e9f0ee21a1ff4397dad429fcdedebd1c2564074479ba39c6f11d589cbc9ba\": rpc error: code = NotFound desc = could not find container \"bb0e9f0ee21a1ff4397dad429fcdedebd1c2564074479ba39c6f11d589cbc9ba\": container with ID starting with bb0e9f0ee21a1ff4397dad429fcdedebd1c2564074479ba39c6f11d589cbc9ba not found: ID does not exist" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.192582 4983 scope.go:117] "RemoveContainer" containerID="0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044" Nov 25 21:32:33 crc kubenswrapper[4983]: 
E1125 21:32:33.193682 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044\": container with ID starting with 0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044 not found: ID does not exist" containerID="0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.193725 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044"} err="failed to get container status \"0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044\": rpc error: code = NotFound desc = could not find container \"0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044\": container with ID starting with 0b13809b7baa52523c062ccd441c6d34e36e7b46bfd5ed67695c2165c42e2044 not found: ID does not exist" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.210368 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a5a15e5-44af-45a8-88d1-6621a706d11f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5a5a15e5-44af-45a8-88d1-6621a706d11f" (UID: "5a5a15e5-44af-45a8-88d1-6621a706d11f"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.256760 4983 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5a5a15e5-44af-45a8-88d1-6621a706d11f-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 21:32:33 crc kubenswrapper[4983]: I1125 21:32:33.616509 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a5a15e5-44af-45a8-88d1-6621a706d11f" path="/var/lib/kubelet/pods/5a5a15e5-44af-45a8-88d1-6621a706d11f/volumes" Nov 25 21:32:43 crc kubenswrapper[4983]: I1125 21:32:43.605793 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:32:43 crc kubenswrapper[4983]: E1125 21:32:43.607035 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.219060 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wwp9x"] Nov 25 21:32:47 crc kubenswrapper[4983]: E1125 21:32:47.220453 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5a15e5-44af-45a8-88d1-6621a706d11f" containerName="copy" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.220478 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5a15e5-44af-45a8-88d1-6621a706d11f" containerName="copy" Nov 25 21:32:47 crc kubenswrapper[4983]: E1125 21:32:47.220524 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" 
containerName="extract-utilities" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.220538 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" containerName="extract-utilities" Nov 25 21:32:47 crc kubenswrapper[4983]: E1125 21:32:47.220574 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" containerName="extract-content" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.220618 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" containerName="extract-content" Nov 25 21:32:47 crc kubenswrapper[4983]: E1125 21:32:47.220666 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5a15e5-44af-45a8-88d1-6621a706d11f" containerName="gather" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.220679 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5a15e5-44af-45a8-88d1-6621a706d11f" containerName="gather" Nov 25 21:32:47 crc kubenswrapper[4983]: E1125 21:32:47.220708 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" containerName="registry-server" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.220721 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" containerName="registry-server" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.221050 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a5a15e5-44af-45a8-88d1-6621a706d11f" containerName="copy" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.221116 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6bb181-e9ce-4fa3-94df-86af04ecb9a1" containerName="registry-server" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.221186 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a5a15e5-44af-45a8-88d1-6621a706d11f" containerName="gather" Nov 25 
21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.224291 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.254754 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wwp9x"] Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.336961 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wznrv\" (UniqueName: \"kubernetes.io/projected/f6f54129-c736-41c7-9899-815464e91893-kube-api-access-wznrv\") pod \"certified-operators-wwp9x\" (UID: \"f6f54129-c736-41c7-9899-815464e91893\") " pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.337074 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6f54129-c736-41c7-9899-815464e91893-catalog-content\") pod \"certified-operators-wwp9x\" (UID: \"f6f54129-c736-41c7-9899-815464e91893\") " pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.337198 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6f54129-c736-41c7-9899-815464e91893-utilities\") pod \"certified-operators-wwp9x\" (UID: \"f6f54129-c736-41c7-9899-815464e91893\") " pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.438638 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wznrv\" (UniqueName: \"kubernetes.io/projected/f6f54129-c736-41c7-9899-815464e91893-kube-api-access-wznrv\") pod \"certified-operators-wwp9x\" (UID: \"f6f54129-c736-41c7-9899-815464e91893\") " 
pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.438737 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6f54129-c736-41c7-9899-815464e91893-catalog-content\") pod \"certified-operators-wwp9x\" (UID: \"f6f54129-c736-41c7-9899-815464e91893\") " pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.438833 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6f54129-c736-41c7-9899-815464e91893-utilities\") pod \"certified-operators-wwp9x\" (UID: \"f6f54129-c736-41c7-9899-815464e91893\") " pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.439222 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6f54129-c736-41c7-9899-815464e91893-catalog-content\") pod \"certified-operators-wwp9x\" (UID: \"f6f54129-c736-41c7-9899-815464e91893\") " pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.439257 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6f54129-c736-41c7-9899-815464e91893-utilities\") pod \"certified-operators-wwp9x\" (UID: \"f6f54129-c736-41c7-9899-815464e91893\") " pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.459768 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wznrv\" (UniqueName: \"kubernetes.io/projected/f6f54129-c736-41c7-9899-815464e91893-kube-api-access-wznrv\") pod \"certified-operators-wwp9x\" (UID: \"f6f54129-c736-41c7-9899-815464e91893\") " 
pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:47 crc kubenswrapper[4983]: I1125 21:32:47.554435 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:48 crc kubenswrapper[4983]: I1125 21:32:48.145502 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wwp9x"] Nov 25 21:32:48 crc kubenswrapper[4983]: W1125 21:32:48.154354 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6f54129_c736_41c7_9899_815464e91893.slice/crio-8163b4e346d184d2860e17c38ee25049865229cfb8634c7d0c316c5d9b06409a WatchSource:0}: Error finding container 8163b4e346d184d2860e17c38ee25049865229cfb8634c7d0c316c5d9b06409a: Status 404 returned error can't find the container with id 8163b4e346d184d2860e17c38ee25049865229cfb8634c7d0c316c5d9b06409a Nov 25 21:32:48 crc kubenswrapper[4983]: I1125 21:32:48.284599 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwp9x" event={"ID":"f6f54129-c736-41c7-9899-815464e91893","Type":"ContainerStarted","Data":"8163b4e346d184d2860e17c38ee25049865229cfb8634c7d0c316c5d9b06409a"} Nov 25 21:32:49 crc kubenswrapper[4983]: I1125 21:32:49.297607 4983 generic.go:334] "Generic (PLEG): container finished" podID="f6f54129-c736-41c7-9899-815464e91893" containerID="140eb4b7ab4ced3900a178fe6a94e6cd2d4348437744f4fcd9e79f55f9747262" exitCode=0 Nov 25 21:32:49 crc kubenswrapper[4983]: I1125 21:32:49.297679 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwp9x" event={"ID":"f6f54129-c736-41c7-9899-815464e91893","Type":"ContainerDied","Data":"140eb4b7ab4ced3900a178fe6a94e6cd2d4348437744f4fcd9e79f55f9747262"} Nov 25 21:32:50 crc kubenswrapper[4983]: I1125 21:32:50.309816 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-wwp9x" event={"ID":"f6f54129-c736-41c7-9899-815464e91893","Type":"ContainerStarted","Data":"0b1fbec8b0bcf7a62c5280c8dcb5819235e213230d3cb8ebd0e5932276e1cf0f"} Nov 25 21:32:52 crc kubenswrapper[4983]: I1125 21:32:52.334341 4983 generic.go:334] "Generic (PLEG): container finished" podID="f6f54129-c736-41c7-9899-815464e91893" containerID="0b1fbec8b0bcf7a62c5280c8dcb5819235e213230d3cb8ebd0e5932276e1cf0f" exitCode=0 Nov 25 21:32:52 crc kubenswrapper[4983]: I1125 21:32:52.334432 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwp9x" event={"ID":"f6f54129-c736-41c7-9899-815464e91893","Type":"ContainerDied","Data":"0b1fbec8b0bcf7a62c5280c8dcb5819235e213230d3cb8ebd0e5932276e1cf0f"} Nov 25 21:32:53 crc kubenswrapper[4983]: I1125 21:32:53.352653 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwp9x" event={"ID":"f6f54129-c736-41c7-9899-815464e91893","Type":"ContainerStarted","Data":"155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff"} Nov 25 21:32:53 crc kubenswrapper[4983]: I1125 21:32:53.395896 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wwp9x" podStartSLOduration=2.940648173 podStartE2EDuration="6.395873961s" podCreationTimestamp="2025-11-25 21:32:47 +0000 UTC" firstStartedPulling="2025-11-25 21:32:49.299694469 +0000 UTC m=+3950.412227861" lastFinishedPulling="2025-11-25 21:32:52.754920247 +0000 UTC m=+3953.867453649" observedRunningTime="2025-11-25 21:32:53.374672523 +0000 UTC m=+3954.487205985" watchObservedRunningTime="2025-11-25 21:32:53.395873961 +0000 UTC m=+3954.508407363" Nov 25 21:32:57 crc kubenswrapper[4983]: I1125 21:32:57.555886 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:57 crc kubenswrapper[4983]: I1125 21:32:57.556839 4983 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:57 crc kubenswrapper[4983]: I1125 21:32:57.648733 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:58 crc kubenswrapper[4983]: I1125 21:32:58.463480 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:32:58 crc kubenswrapper[4983]: I1125 21:32:58.530306 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wwp9x"] Nov 25 21:32:58 crc kubenswrapper[4983]: I1125 21:32:58.605747 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:32:58 crc kubenswrapper[4983]: E1125 21:32:58.606601 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:33:00 crc kubenswrapper[4983]: I1125 21:33:00.426624 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wwp9x" podUID="f6f54129-c736-41c7-9899-815464e91893" containerName="registry-server" containerID="cri-o://155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff" gracePeriod=2 Nov 25 21:33:00 crc kubenswrapper[4983]: I1125 21:33:00.975151 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.087714 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wznrv\" (UniqueName: \"kubernetes.io/projected/f6f54129-c736-41c7-9899-815464e91893-kube-api-access-wznrv\") pod \"f6f54129-c736-41c7-9899-815464e91893\" (UID: \"f6f54129-c736-41c7-9899-815464e91893\") " Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.087785 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6f54129-c736-41c7-9899-815464e91893-utilities\") pod \"f6f54129-c736-41c7-9899-815464e91893\" (UID: \"f6f54129-c736-41c7-9899-815464e91893\") " Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.087819 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6f54129-c736-41c7-9899-815464e91893-catalog-content\") pod \"f6f54129-c736-41c7-9899-815464e91893\" (UID: \"f6f54129-c736-41c7-9899-815464e91893\") " Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.089715 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6f54129-c736-41c7-9899-815464e91893-utilities" (OuterVolumeSpecName: "utilities") pod "f6f54129-c736-41c7-9899-815464e91893" (UID: "f6f54129-c736-41c7-9899-815464e91893"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.094818 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6f54129-c736-41c7-9899-815464e91893-kube-api-access-wznrv" (OuterVolumeSpecName: "kube-api-access-wznrv") pod "f6f54129-c736-41c7-9899-815464e91893" (UID: "f6f54129-c736-41c7-9899-815464e91893"). InnerVolumeSpecName "kube-api-access-wznrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.149946 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6f54129-c736-41c7-9899-815464e91893-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6f54129-c736-41c7-9899-815464e91893" (UID: "f6f54129-c736-41c7-9899-815464e91893"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.190677 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wznrv\" (UniqueName: \"kubernetes.io/projected/f6f54129-c736-41c7-9899-815464e91893-kube-api-access-wznrv\") on node \"crc\" DevicePath \"\"" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.190714 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6f54129-c736-41c7-9899-815464e91893-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.190729 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6f54129-c736-41c7-9899-815464e91893-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.437610 4983 generic.go:334] "Generic (PLEG): container finished" podID="f6f54129-c736-41c7-9899-815464e91893" containerID="155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff" exitCode=0 Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.437652 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wwp9x" event={"ID":"f6f54129-c736-41c7-9899-815464e91893","Type":"ContainerDied","Data":"155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff"} Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.437681 4983 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-wwp9x" event={"ID":"f6f54129-c736-41c7-9899-815464e91893","Type":"ContainerDied","Data":"8163b4e346d184d2860e17c38ee25049865229cfb8634c7d0c316c5d9b06409a"} Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.437703 4983 scope.go:117] "RemoveContainer" containerID="155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.437821 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wwp9x" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.465388 4983 scope.go:117] "RemoveContainer" containerID="0b1fbec8b0bcf7a62c5280c8dcb5819235e213230d3cb8ebd0e5932276e1cf0f" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.478698 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wwp9x"] Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.493824 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wwp9x"] Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.512058 4983 scope.go:117] "RemoveContainer" containerID="140eb4b7ab4ced3900a178fe6a94e6cd2d4348437744f4fcd9e79f55f9747262" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.555733 4983 scope.go:117] "RemoveContainer" containerID="155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff" Nov 25 21:33:01 crc kubenswrapper[4983]: E1125 21:33:01.556335 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff\": container with ID starting with 155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff not found: ID does not exist" containerID="155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 
21:33:01.556388 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff"} err="failed to get container status \"155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff\": rpc error: code = NotFound desc = could not find container \"155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff\": container with ID starting with 155ce04793519d91b8f0012a6b09230e11c9c350fad8696b869d34df65e80eff not found: ID does not exist" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.556423 4983 scope.go:117] "RemoveContainer" containerID="0b1fbec8b0bcf7a62c5280c8dcb5819235e213230d3cb8ebd0e5932276e1cf0f" Nov 25 21:33:01 crc kubenswrapper[4983]: E1125 21:33:01.556949 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b1fbec8b0bcf7a62c5280c8dcb5819235e213230d3cb8ebd0e5932276e1cf0f\": container with ID starting with 0b1fbec8b0bcf7a62c5280c8dcb5819235e213230d3cb8ebd0e5932276e1cf0f not found: ID does not exist" containerID="0b1fbec8b0bcf7a62c5280c8dcb5819235e213230d3cb8ebd0e5932276e1cf0f" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.556981 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1fbec8b0bcf7a62c5280c8dcb5819235e213230d3cb8ebd0e5932276e1cf0f"} err="failed to get container status \"0b1fbec8b0bcf7a62c5280c8dcb5819235e213230d3cb8ebd0e5932276e1cf0f\": rpc error: code = NotFound desc = could not find container \"0b1fbec8b0bcf7a62c5280c8dcb5819235e213230d3cb8ebd0e5932276e1cf0f\": container with ID starting with 0b1fbec8b0bcf7a62c5280c8dcb5819235e213230d3cb8ebd0e5932276e1cf0f not found: ID does not exist" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.557006 4983 scope.go:117] "RemoveContainer" containerID="140eb4b7ab4ced3900a178fe6a94e6cd2d4348437744f4fcd9e79f55f9747262" Nov 25 21:33:01 crc 
kubenswrapper[4983]: E1125 21:33:01.557362 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140eb4b7ab4ced3900a178fe6a94e6cd2d4348437744f4fcd9e79f55f9747262\": container with ID starting with 140eb4b7ab4ced3900a178fe6a94e6cd2d4348437744f4fcd9e79f55f9747262 not found: ID does not exist" containerID="140eb4b7ab4ced3900a178fe6a94e6cd2d4348437744f4fcd9e79f55f9747262" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.557409 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140eb4b7ab4ced3900a178fe6a94e6cd2d4348437744f4fcd9e79f55f9747262"} err="failed to get container status \"140eb4b7ab4ced3900a178fe6a94e6cd2d4348437744f4fcd9e79f55f9747262\": rpc error: code = NotFound desc = could not find container \"140eb4b7ab4ced3900a178fe6a94e6cd2d4348437744f4fcd9e79f55f9747262\": container with ID starting with 140eb4b7ab4ced3900a178fe6a94e6cd2d4348437744f4fcd9e79f55f9747262 not found: ID does not exist" Nov 25 21:33:01 crc kubenswrapper[4983]: I1125 21:33:01.616259 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6f54129-c736-41c7-9899-815464e91893" path="/var/lib/kubelet/pods/f6f54129-c736-41c7-9899-815464e91893/volumes" Nov 25 21:33:12 crc kubenswrapper[4983]: I1125 21:33:12.605851 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:33:12 crc kubenswrapper[4983]: E1125 21:33:12.607239 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:33:23 crc 
kubenswrapper[4983]: I1125 21:33:23.604651 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:33:23 crc kubenswrapper[4983]: E1125 21:33:23.607135 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:33:34 crc kubenswrapper[4983]: I1125 21:33:34.604916 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:33:34 crc kubenswrapper[4983]: E1125 21:33:34.605692 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:33:43 crc kubenswrapper[4983]: I1125 21:33:43.723331 4983 scope.go:117] "RemoveContainer" containerID="be81215a5a335193b612d7e0af14668732a91b76236c9b9fb5b2e6e385a5a75e" Nov 25 21:33:48 crc kubenswrapper[4983]: I1125 21:33:48.605010 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:33:48 crc kubenswrapper[4983]: E1125 21:33:48.605707 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:34:03 crc kubenswrapper[4983]: I1125 21:34:03.605465 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:34:03 crc kubenswrapper[4983]: E1125 21:34:03.606348 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:34:17 crc kubenswrapper[4983]: I1125 21:34:17.605870 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:34:17 crc kubenswrapper[4983]: E1125 21:34:17.606766 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:34:28 crc kubenswrapper[4983]: I1125 21:34:28.605713 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:34:28 crc kubenswrapper[4983]: E1125 21:34:28.606470 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:34:39 crc kubenswrapper[4983]: I1125 21:34:39.624322 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:34:39 crc kubenswrapper[4983]: E1125 21:34:39.625712 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:34:53 crc kubenswrapper[4983]: I1125 21:34:53.605284 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:34:53 crc kubenswrapper[4983]: E1125 21:34:53.606246 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.115072 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pljc7/must-gather-zbtz6"] Nov 25 21:35:07 crc kubenswrapper[4983]: E1125 21:35:07.116133 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f54129-c736-41c7-9899-815464e91893" containerName="registry-server" Nov 25 21:35:07 crc 
kubenswrapper[4983]: I1125 21:35:07.116153 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f54129-c736-41c7-9899-815464e91893" containerName="registry-server" Nov 25 21:35:07 crc kubenswrapper[4983]: E1125 21:35:07.116204 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f54129-c736-41c7-9899-815464e91893" containerName="extract-content" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.116213 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f54129-c736-41c7-9899-815464e91893" containerName="extract-content" Nov 25 21:35:07 crc kubenswrapper[4983]: E1125 21:35:07.116239 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6f54129-c736-41c7-9899-815464e91893" containerName="extract-utilities" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.116248 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6f54129-c736-41c7-9899-815464e91893" containerName="extract-utilities" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.116456 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6f54129-c736-41c7-9899-815464e91893" containerName="registry-server" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.117548 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pljc7/must-gather-zbtz6" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.119527 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pljc7"/"openshift-service-ca.crt" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.119537 4983 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pljc7"/"default-dockercfg-m55hb" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.119650 4983 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pljc7"/"kube-root-ca.crt" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.149208 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pljc7/must-gather-zbtz6"] Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.226498 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5fw6\" (UniqueName: \"kubernetes.io/projected/ad096a12-9a47-4707-91be-37bcfec628b0-kube-api-access-h5fw6\") pod \"must-gather-zbtz6\" (UID: \"ad096a12-9a47-4707-91be-37bcfec628b0\") " pod="openshift-must-gather-pljc7/must-gather-zbtz6" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.226598 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad096a12-9a47-4707-91be-37bcfec628b0-must-gather-output\") pod \"must-gather-zbtz6\" (UID: \"ad096a12-9a47-4707-91be-37bcfec628b0\") " pod="openshift-must-gather-pljc7/must-gather-zbtz6" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.327803 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5fw6\" (UniqueName: \"kubernetes.io/projected/ad096a12-9a47-4707-91be-37bcfec628b0-kube-api-access-h5fw6\") pod \"must-gather-zbtz6\" (UID: \"ad096a12-9a47-4707-91be-37bcfec628b0\") " 
pod="openshift-must-gather-pljc7/must-gather-zbtz6" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.327851 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad096a12-9a47-4707-91be-37bcfec628b0-must-gather-output\") pod \"must-gather-zbtz6\" (UID: \"ad096a12-9a47-4707-91be-37bcfec628b0\") " pod="openshift-must-gather-pljc7/must-gather-zbtz6" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.328404 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad096a12-9a47-4707-91be-37bcfec628b0-must-gather-output\") pod \"must-gather-zbtz6\" (UID: \"ad096a12-9a47-4707-91be-37bcfec628b0\") " pod="openshift-must-gather-pljc7/must-gather-zbtz6" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.352515 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5fw6\" (UniqueName: \"kubernetes.io/projected/ad096a12-9a47-4707-91be-37bcfec628b0-kube-api-access-h5fw6\") pod \"must-gather-zbtz6\" (UID: \"ad096a12-9a47-4707-91be-37bcfec628b0\") " pod="openshift-must-gather-pljc7/must-gather-zbtz6" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.436329 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pljc7/must-gather-zbtz6" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.605713 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:35:07 crc kubenswrapper[4983]: E1125 21:35:07.606238 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:35:07 crc kubenswrapper[4983]: I1125 21:35:07.883975 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pljc7/must-gather-zbtz6"] Nov 25 21:35:08 crc kubenswrapper[4983]: I1125 21:35:08.806875 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pljc7/must-gather-zbtz6" event={"ID":"ad096a12-9a47-4707-91be-37bcfec628b0","Type":"ContainerStarted","Data":"750ccdc45912b1fc9f3f6bf74448ed24dffc6e81203a80446ef70903b6b5f9bb"} Nov 25 21:35:08 crc kubenswrapper[4983]: I1125 21:35:08.807492 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pljc7/must-gather-zbtz6" event={"ID":"ad096a12-9a47-4707-91be-37bcfec628b0","Type":"ContainerStarted","Data":"402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8"} Nov 25 21:35:08 crc kubenswrapper[4983]: I1125 21:35:08.807514 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pljc7/must-gather-zbtz6" event={"ID":"ad096a12-9a47-4707-91be-37bcfec628b0","Type":"ContainerStarted","Data":"a55d4c58e9874f2b4c99e46ab35081fd699e700af991ef8848cd7d8262fddda9"} Nov 25 21:35:08 crc kubenswrapper[4983]: I1125 21:35:08.832915 4983 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-pljc7/must-gather-zbtz6" podStartSLOduration=1.832886458 podStartE2EDuration="1.832886458s" podCreationTimestamp="2025-11-25 21:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 21:35:08.826670322 +0000 UTC m=+4089.939203754" watchObservedRunningTime="2025-11-25 21:35:08.832886458 +0000 UTC m=+4089.945419890" Nov 25 21:35:10 crc kubenswrapper[4983]: E1125 21:35:10.932856 4983 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.173:46136->38.102.83.173:46005: read tcp 38.102.83.173:46136->38.102.83.173:46005: read: connection reset by peer Nov 25 21:35:11 crc kubenswrapper[4983]: I1125 21:35:11.586620 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pljc7/crc-debug-tpdxk"] Nov 25 21:35:11 crc kubenswrapper[4983]: I1125 21:35:11.588492 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pljc7/crc-debug-tpdxk" Nov 25 21:35:11 crc kubenswrapper[4983]: I1125 21:35:11.720998 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvjx8\" (UniqueName: \"kubernetes.io/projected/42ac2d6d-4315-4648-b8e4-b59efb3cbcff-kube-api-access-kvjx8\") pod \"crc-debug-tpdxk\" (UID: \"42ac2d6d-4315-4648-b8e4-b59efb3cbcff\") " pod="openshift-must-gather-pljc7/crc-debug-tpdxk" Nov 25 21:35:11 crc kubenswrapper[4983]: I1125 21:35:11.721273 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42ac2d6d-4315-4648-b8e4-b59efb3cbcff-host\") pod \"crc-debug-tpdxk\" (UID: \"42ac2d6d-4315-4648-b8e4-b59efb3cbcff\") " pod="openshift-must-gather-pljc7/crc-debug-tpdxk" Nov 25 21:35:11 crc kubenswrapper[4983]: I1125 21:35:11.824779 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvjx8\" (UniqueName: \"kubernetes.io/projected/42ac2d6d-4315-4648-b8e4-b59efb3cbcff-kube-api-access-kvjx8\") pod \"crc-debug-tpdxk\" (UID: \"42ac2d6d-4315-4648-b8e4-b59efb3cbcff\") " pod="openshift-must-gather-pljc7/crc-debug-tpdxk" Nov 25 21:35:11 crc kubenswrapper[4983]: I1125 21:35:11.825110 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42ac2d6d-4315-4648-b8e4-b59efb3cbcff-host\") pod \"crc-debug-tpdxk\" (UID: \"42ac2d6d-4315-4648-b8e4-b59efb3cbcff\") " pod="openshift-must-gather-pljc7/crc-debug-tpdxk" Nov 25 21:35:11 crc kubenswrapper[4983]: I1125 21:35:11.825232 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42ac2d6d-4315-4648-b8e4-b59efb3cbcff-host\") pod \"crc-debug-tpdxk\" (UID: \"42ac2d6d-4315-4648-b8e4-b59efb3cbcff\") " pod="openshift-must-gather-pljc7/crc-debug-tpdxk" Nov 25 21:35:11 crc 
kubenswrapper[4983]: I1125 21:35:11.852989 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvjx8\" (UniqueName: \"kubernetes.io/projected/42ac2d6d-4315-4648-b8e4-b59efb3cbcff-kube-api-access-kvjx8\") pod \"crc-debug-tpdxk\" (UID: \"42ac2d6d-4315-4648-b8e4-b59efb3cbcff\") " pod="openshift-must-gather-pljc7/crc-debug-tpdxk" Nov 25 21:35:11 crc kubenswrapper[4983]: I1125 21:35:11.911083 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pljc7/crc-debug-tpdxk" Nov 25 21:35:11 crc kubenswrapper[4983]: W1125 21:35:11.945866 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42ac2d6d_4315_4648_b8e4_b59efb3cbcff.slice/crio-ff39071b9b0b7d96260e25ce8266578b6149939a1155b04fe08538971b34b8c2 WatchSource:0}: Error finding container ff39071b9b0b7d96260e25ce8266578b6149939a1155b04fe08538971b34b8c2: Status 404 returned error can't find the container with id ff39071b9b0b7d96260e25ce8266578b6149939a1155b04fe08538971b34b8c2 Nov 25 21:35:12 crc kubenswrapper[4983]: I1125 21:35:12.862867 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pljc7/crc-debug-tpdxk" event={"ID":"42ac2d6d-4315-4648-b8e4-b59efb3cbcff","Type":"ContainerStarted","Data":"c6118faa0570efd9760d847c56171ddbd39e20ee16ce4b87b80a4bdd3e5ec22b"} Nov 25 21:35:12 crc kubenswrapper[4983]: I1125 21:35:12.863128 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pljc7/crc-debug-tpdxk" event={"ID":"42ac2d6d-4315-4648-b8e4-b59efb3cbcff","Type":"ContainerStarted","Data":"ff39071b9b0b7d96260e25ce8266578b6149939a1155b04fe08538971b34b8c2"} Nov 25 21:35:12 crc kubenswrapper[4983]: I1125 21:35:12.876774 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pljc7/crc-debug-tpdxk" podStartSLOduration=1.876749598 podStartE2EDuration="1.876749598s" 
podCreationTimestamp="2025-11-25 21:35:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 21:35:12.87382831 +0000 UTC m=+4093.986361722" watchObservedRunningTime="2025-11-25 21:35:12.876749598 +0000 UTC m=+4093.989283030" Nov 25 21:35:18 crc kubenswrapper[4983]: I1125 21:35:18.605157 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:35:18 crc kubenswrapper[4983]: E1125 21:35:18.606026 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:35:32 crc kubenswrapper[4983]: I1125 21:35:32.605999 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:35:32 crc kubenswrapper[4983]: E1125 21:35:32.606745 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:35:45 crc kubenswrapper[4983]: I1125 21:35:45.609418 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:35:45 crc kubenswrapper[4983]: E1125 21:35:45.610236 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:35:48 crc kubenswrapper[4983]: I1125 21:35:48.165039 4983 generic.go:334] "Generic (PLEG): container finished" podID="42ac2d6d-4315-4648-b8e4-b59efb3cbcff" containerID="c6118faa0570efd9760d847c56171ddbd39e20ee16ce4b87b80a4bdd3e5ec22b" exitCode=0 Nov 25 21:35:48 crc kubenswrapper[4983]: I1125 21:35:48.165129 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pljc7/crc-debug-tpdxk" event={"ID":"42ac2d6d-4315-4648-b8e4-b59efb3cbcff","Type":"ContainerDied","Data":"c6118faa0570efd9760d847c56171ddbd39e20ee16ce4b87b80a4bdd3e5ec22b"} Nov 25 21:35:49 crc kubenswrapper[4983]: I1125 21:35:49.287992 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pljc7/crc-debug-tpdxk" Nov 25 21:35:49 crc kubenswrapper[4983]: I1125 21:35:49.324834 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pljc7/crc-debug-tpdxk"] Nov 25 21:35:49 crc kubenswrapper[4983]: I1125 21:35:49.333195 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pljc7/crc-debug-tpdxk"] Nov 25 21:35:49 crc kubenswrapper[4983]: I1125 21:35:49.417744 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42ac2d6d-4315-4648-b8e4-b59efb3cbcff-host\") pod \"42ac2d6d-4315-4648-b8e4-b59efb3cbcff\" (UID: \"42ac2d6d-4315-4648-b8e4-b59efb3cbcff\") " Nov 25 21:35:49 crc kubenswrapper[4983]: I1125 21:35:49.417871 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvjx8\" (UniqueName: \"kubernetes.io/projected/42ac2d6d-4315-4648-b8e4-b59efb3cbcff-kube-api-access-kvjx8\") pod \"42ac2d6d-4315-4648-b8e4-b59efb3cbcff\" (UID: \"42ac2d6d-4315-4648-b8e4-b59efb3cbcff\") " Nov 25 21:35:49 crc kubenswrapper[4983]: I1125 21:35:49.417897 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ac2d6d-4315-4648-b8e4-b59efb3cbcff-host" (OuterVolumeSpecName: "host") pod "42ac2d6d-4315-4648-b8e4-b59efb3cbcff" (UID: "42ac2d6d-4315-4648-b8e4-b59efb3cbcff"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 21:35:49 crc kubenswrapper[4983]: I1125 21:35:49.418478 4983 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/42ac2d6d-4315-4648-b8e4-b59efb3cbcff-host\") on node \"crc\" DevicePath \"\"" Nov 25 21:35:49 crc kubenswrapper[4983]: I1125 21:35:49.424770 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ac2d6d-4315-4648-b8e4-b59efb3cbcff-kube-api-access-kvjx8" (OuterVolumeSpecName: "kube-api-access-kvjx8") pod "42ac2d6d-4315-4648-b8e4-b59efb3cbcff" (UID: "42ac2d6d-4315-4648-b8e4-b59efb3cbcff"). InnerVolumeSpecName "kube-api-access-kvjx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:35:49 crc kubenswrapper[4983]: I1125 21:35:49.520643 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvjx8\" (UniqueName: \"kubernetes.io/projected/42ac2d6d-4315-4648-b8e4-b59efb3cbcff-kube-api-access-kvjx8\") on node \"crc\" DevicePath \"\"" Nov 25 21:35:49 crc kubenswrapper[4983]: I1125 21:35:49.618759 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ac2d6d-4315-4648-b8e4-b59efb3cbcff" path="/var/lib/kubelet/pods/42ac2d6d-4315-4648-b8e4-b59efb3cbcff/volumes" Nov 25 21:35:50 crc kubenswrapper[4983]: I1125 21:35:50.186952 4983 scope.go:117] "RemoveContainer" containerID="c6118faa0570efd9760d847c56171ddbd39e20ee16ce4b87b80a4bdd3e5ec22b" Nov 25 21:35:50 crc kubenswrapper[4983]: I1125 21:35:50.187001 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pljc7/crc-debug-tpdxk" Nov 25 21:35:50 crc kubenswrapper[4983]: I1125 21:35:50.508904 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pljc7/crc-debug-4ngb9"] Nov 25 21:35:50 crc kubenswrapper[4983]: E1125 21:35:50.509333 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ac2d6d-4315-4648-b8e4-b59efb3cbcff" containerName="container-00" Nov 25 21:35:50 crc kubenswrapper[4983]: I1125 21:35:50.509352 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ac2d6d-4315-4648-b8e4-b59efb3cbcff" containerName="container-00" Nov 25 21:35:50 crc kubenswrapper[4983]: I1125 21:35:50.509671 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ac2d6d-4315-4648-b8e4-b59efb3cbcff" containerName="container-00" Nov 25 21:35:50 crc kubenswrapper[4983]: I1125 21:35:50.510351 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pljc7/crc-debug-4ngb9" Nov 25 21:35:50 crc kubenswrapper[4983]: I1125 21:35:50.641541 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhthc\" (UniqueName: \"kubernetes.io/projected/32be172b-f623-41f6-97e3-2fd0152373ee-kube-api-access-lhthc\") pod \"crc-debug-4ngb9\" (UID: \"32be172b-f623-41f6-97e3-2fd0152373ee\") " pod="openshift-must-gather-pljc7/crc-debug-4ngb9" Nov 25 21:35:50 crc kubenswrapper[4983]: I1125 21:35:50.641785 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32be172b-f623-41f6-97e3-2fd0152373ee-host\") pod \"crc-debug-4ngb9\" (UID: \"32be172b-f623-41f6-97e3-2fd0152373ee\") " pod="openshift-must-gather-pljc7/crc-debug-4ngb9" Nov 25 21:35:50 crc kubenswrapper[4983]: I1125 21:35:50.744110 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhthc\" (UniqueName: 
\"kubernetes.io/projected/32be172b-f623-41f6-97e3-2fd0152373ee-kube-api-access-lhthc\") pod \"crc-debug-4ngb9\" (UID: \"32be172b-f623-41f6-97e3-2fd0152373ee\") " pod="openshift-must-gather-pljc7/crc-debug-4ngb9" Nov 25 21:35:50 crc kubenswrapper[4983]: I1125 21:35:50.744548 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32be172b-f623-41f6-97e3-2fd0152373ee-host\") pod \"crc-debug-4ngb9\" (UID: \"32be172b-f623-41f6-97e3-2fd0152373ee\") " pod="openshift-must-gather-pljc7/crc-debug-4ngb9" Nov 25 21:35:50 crc kubenswrapper[4983]: I1125 21:35:50.745505 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32be172b-f623-41f6-97e3-2fd0152373ee-host\") pod \"crc-debug-4ngb9\" (UID: \"32be172b-f623-41f6-97e3-2fd0152373ee\") " pod="openshift-must-gather-pljc7/crc-debug-4ngb9" Nov 25 21:35:50 crc kubenswrapper[4983]: I1125 21:35:50.977280 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhthc\" (UniqueName: \"kubernetes.io/projected/32be172b-f623-41f6-97e3-2fd0152373ee-kube-api-access-lhthc\") pod \"crc-debug-4ngb9\" (UID: \"32be172b-f623-41f6-97e3-2fd0152373ee\") " pod="openshift-must-gather-pljc7/crc-debug-4ngb9" Nov 25 21:35:51 crc kubenswrapper[4983]: I1125 21:35:51.136821 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pljc7/crc-debug-4ngb9" Nov 25 21:35:51 crc kubenswrapper[4983]: W1125 21:35:51.187035 4983 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32be172b_f623_41f6_97e3_2fd0152373ee.slice/crio-aa36f4fd0705a83a44467b1ca201f13a2136f09ff5fffbb89213898b5163d820 WatchSource:0}: Error finding container aa36f4fd0705a83a44467b1ca201f13a2136f09ff5fffbb89213898b5163d820: Status 404 returned error can't find the container with id aa36f4fd0705a83a44467b1ca201f13a2136f09ff5fffbb89213898b5163d820 Nov 25 21:35:51 crc kubenswrapper[4983]: I1125 21:35:51.207532 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pljc7/crc-debug-4ngb9" event={"ID":"32be172b-f623-41f6-97e3-2fd0152373ee","Type":"ContainerStarted","Data":"aa36f4fd0705a83a44467b1ca201f13a2136f09ff5fffbb89213898b5163d820"} Nov 25 21:35:52 crc kubenswrapper[4983]: I1125 21:35:52.219865 4983 generic.go:334] "Generic (PLEG): container finished" podID="32be172b-f623-41f6-97e3-2fd0152373ee" containerID="24999d545ce8bd31a3ac292a617f8bfcaeb1efb0cf440db9ff78b4cc5f01f5ba" exitCode=0 Nov 25 21:35:52 crc kubenswrapper[4983]: I1125 21:35:52.220136 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pljc7/crc-debug-4ngb9" event={"ID":"32be172b-f623-41f6-97e3-2fd0152373ee","Type":"ContainerDied","Data":"24999d545ce8bd31a3ac292a617f8bfcaeb1efb0cf440db9ff78b4cc5f01f5ba"} Nov 25 21:35:52 crc kubenswrapper[4983]: I1125 21:35:52.711572 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pljc7/crc-debug-4ngb9"] Nov 25 21:35:52 crc kubenswrapper[4983]: I1125 21:35:52.721915 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pljc7/crc-debug-4ngb9"] Nov 25 21:35:53 crc kubenswrapper[4983]: I1125 21:35:53.365164 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pljc7/crc-debug-4ngb9" Nov 25 21:35:53 crc kubenswrapper[4983]: I1125 21:35:53.502377 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhthc\" (UniqueName: \"kubernetes.io/projected/32be172b-f623-41f6-97e3-2fd0152373ee-kube-api-access-lhthc\") pod \"32be172b-f623-41f6-97e3-2fd0152373ee\" (UID: \"32be172b-f623-41f6-97e3-2fd0152373ee\") " Nov 25 21:35:53 crc kubenswrapper[4983]: I1125 21:35:53.502584 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32be172b-f623-41f6-97e3-2fd0152373ee-host\") pod \"32be172b-f623-41f6-97e3-2fd0152373ee\" (UID: \"32be172b-f623-41f6-97e3-2fd0152373ee\") " Nov 25 21:35:53 crc kubenswrapper[4983]: I1125 21:35:53.502671 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32be172b-f623-41f6-97e3-2fd0152373ee-host" (OuterVolumeSpecName: "host") pod "32be172b-f623-41f6-97e3-2fd0152373ee" (UID: "32be172b-f623-41f6-97e3-2fd0152373ee"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 21:35:53 crc kubenswrapper[4983]: I1125 21:35:53.503075 4983 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32be172b-f623-41f6-97e3-2fd0152373ee-host\") on node \"crc\" DevicePath \"\"" Nov 25 21:35:53 crc kubenswrapper[4983]: I1125 21:35:53.572538 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32be172b-f623-41f6-97e3-2fd0152373ee-kube-api-access-lhthc" (OuterVolumeSpecName: "kube-api-access-lhthc") pod "32be172b-f623-41f6-97e3-2fd0152373ee" (UID: "32be172b-f623-41f6-97e3-2fd0152373ee"). InnerVolumeSpecName "kube-api-access-lhthc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:35:53 crc kubenswrapper[4983]: I1125 21:35:53.604477 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhthc\" (UniqueName: \"kubernetes.io/projected/32be172b-f623-41f6-97e3-2fd0152373ee-kube-api-access-lhthc\") on node \"crc\" DevicePath \"\"" Nov 25 21:35:53 crc kubenswrapper[4983]: I1125 21:35:53.621003 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32be172b-f623-41f6-97e3-2fd0152373ee" path="/var/lib/kubelet/pods/32be172b-f623-41f6-97e3-2fd0152373ee/volumes" Nov 25 21:35:54 crc kubenswrapper[4983]: I1125 21:35:54.178698 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pljc7/crc-debug-drf62"] Nov 25 21:35:54 crc kubenswrapper[4983]: E1125 21:35:54.179360 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32be172b-f623-41f6-97e3-2fd0152373ee" containerName="container-00" Nov 25 21:35:54 crc kubenswrapper[4983]: I1125 21:35:54.179377 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="32be172b-f623-41f6-97e3-2fd0152373ee" containerName="container-00" Nov 25 21:35:54 crc kubenswrapper[4983]: I1125 21:35:54.179589 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="32be172b-f623-41f6-97e3-2fd0152373ee" containerName="container-00" Nov 25 21:35:54 crc kubenswrapper[4983]: I1125 21:35:54.180174 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pljc7/crc-debug-drf62" Nov 25 21:35:54 crc kubenswrapper[4983]: I1125 21:35:54.243627 4983 scope.go:117] "RemoveContainer" containerID="24999d545ce8bd31a3ac292a617f8bfcaeb1efb0cf440db9ff78b4cc5f01f5ba" Nov 25 21:35:54 crc kubenswrapper[4983]: I1125 21:35:54.243750 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pljc7/crc-debug-4ngb9" Nov 25 21:35:54 crc kubenswrapper[4983]: I1125 21:35:54.319581 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x7jt\" (UniqueName: \"kubernetes.io/projected/dadef7ab-0a7f-434d-9345-4154f92eb72a-kube-api-access-2x7jt\") pod \"crc-debug-drf62\" (UID: \"dadef7ab-0a7f-434d-9345-4154f92eb72a\") " pod="openshift-must-gather-pljc7/crc-debug-drf62" Nov 25 21:35:54 crc kubenswrapper[4983]: I1125 21:35:54.319639 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dadef7ab-0a7f-434d-9345-4154f92eb72a-host\") pod \"crc-debug-drf62\" (UID: \"dadef7ab-0a7f-434d-9345-4154f92eb72a\") " pod="openshift-must-gather-pljc7/crc-debug-drf62" Nov 25 21:35:54 crc kubenswrapper[4983]: I1125 21:35:54.421611 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x7jt\" (UniqueName: \"kubernetes.io/projected/dadef7ab-0a7f-434d-9345-4154f92eb72a-kube-api-access-2x7jt\") pod \"crc-debug-drf62\" (UID: \"dadef7ab-0a7f-434d-9345-4154f92eb72a\") " pod="openshift-must-gather-pljc7/crc-debug-drf62" Nov 25 21:35:54 crc kubenswrapper[4983]: I1125 21:35:54.421692 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dadef7ab-0a7f-434d-9345-4154f92eb72a-host\") pod \"crc-debug-drf62\" (UID: \"dadef7ab-0a7f-434d-9345-4154f92eb72a\") " pod="openshift-must-gather-pljc7/crc-debug-drf62" Nov 25 21:35:54 crc kubenswrapper[4983]: I1125 21:35:54.421835 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dadef7ab-0a7f-434d-9345-4154f92eb72a-host\") pod \"crc-debug-drf62\" (UID: \"dadef7ab-0a7f-434d-9345-4154f92eb72a\") " pod="openshift-must-gather-pljc7/crc-debug-drf62" Nov 25 21:35:54 crc 
kubenswrapper[4983]: I1125 21:35:54.443516 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x7jt\" (UniqueName: \"kubernetes.io/projected/dadef7ab-0a7f-434d-9345-4154f92eb72a-kube-api-access-2x7jt\") pod \"crc-debug-drf62\" (UID: \"dadef7ab-0a7f-434d-9345-4154f92eb72a\") " pod="openshift-must-gather-pljc7/crc-debug-drf62" Nov 25 21:35:54 crc kubenswrapper[4983]: I1125 21:35:54.506900 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pljc7/crc-debug-drf62" Nov 25 21:35:55 crc kubenswrapper[4983]: I1125 21:35:55.279046 4983 generic.go:334] "Generic (PLEG): container finished" podID="dadef7ab-0a7f-434d-9345-4154f92eb72a" containerID="0db2c5a24c69dbf94e72b2bfc2842f965314eb0bb42eb83875201b8ecd3e4351" exitCode=0 Nov 25 21:35:55 crc kubenswrapper[4983]: I1125 21:35:55.279274 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pljc7/crc-debug-drf62" event={"ID":"dadef7ab-0a7f-434d-9345-4154f92eb72a","Type":"ContainerDied","Data":"0db2c5a24c69dbf94e72b2bfc2842f965314eb0bb42eb83875201b8ecd3e4351"} Nov 25 21:35:55 crc kubenswrapper[4983]: I1125 21:35:55.279567 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pljc7/crc-debug-drf62" event={"ID":"dadef7ab-0a7f-434d-9345-4154f92eb72a","Type":"ContainerStarted","Data":"cb10f8c9a1130494c5ce06616e2bb2291ca067a0b2bf9c7f238dfce8ec0c2e40"} Nov 25 21:35:55 crc kubenswrapper[4983]: I1125 21:35:55.352504 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pljc7/crc-debug-drf62"] Nov 25 21:35:55 crc kubenswrapper[4983]: I1125 21:35:55.364507 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pljc7/crc-debug-drf62"] Nov 25 21:35:56 crc kubenswrapper[4983]: I1125 21:35:56.409874 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pljc7/crc-debug-drf62" Nov 25 21:35:56 crc kubenswrapper[4983]: I1125 21:35:56.560262 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x7jt\" (UniqueName: \"kubernetes.io/projected/dadef7ab-0a7f-434d-9345-4154f92eb72a-kube-api-access-2x7jt\") pod \"dadef7ab-0a7f-434d-9345-4154f92eb72a\" (UID: \"dadef7ab-0a7f-434d-9345-4154f92eb72a\") " Nov 25 21:35:56 crc kubenswrapper[4983]: I1125 21:35:56.560459 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dadef7ab-0a7f-434d-9345-4154f92eb72a-host\") pod \"dadef7ab-0a7f-434d-9345-4154f92eb72a\" (UID: \"dadef7ab-0a7f-434d-9345-4154f92eb72a\") " Nov 25 21:35:56 crc kubenswrapper[4983]: I1125 21:35:56.560631 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dadef7ab-0a7f-434d-9345-4154f92eb72a-host" (OuterVolumeSpecName: "host") pod "dadef7ab-0a7f-434d-9345-4154f92eb72a" (UID: "dadef7ab-0a7f-434d-9345-4154f92eb72a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 21:35:56 crc kubenswrapper[4983]: I1125 21:35:56.560928 4983 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dadef7ab-0a7f-434d-9345-4154f92eb72a-host\") on node \"crc\" DevicePath \"\"" Nov 25 21:35:56 crc kubenswrapper[4983]: I1125 21:35:56.564913 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dadef7ab-0a7f-434d-9345-4154f92eb72a-kube-api-access-2x7jt" (OuterVolumeSpecName: "kube-api-access-2x7jt") pod "dadef7ab-0a7f-434d-9345-4154f92eb72a" (UID: "dadef7ab-0a7f-434d-9345-4154f92eb72a"). InnerVolumeSpecName "kube-api-access-2x7jt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:35:56 crc kubenswrapper[4983]: I1125 21:35:56.604638 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:35:56 crc kubenswrapper[4983]: E1125 21:35:56.605016 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:35:56 crc kubenswrapper[4983]: I1125 21:35:56.662984 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x7jt\" (UniqueName: \"kubernetes.io/projected/dadef7ab-0a7f-434d-9345-4154f92eb72a-kube-api-access-2x7jt\") on node \"crc\" DevicePath \"\"" Nov 25 21:35:57 crc kubenswrapper[4983]: I1125 21:35:57.305094 4983 scope.go:117] "RemoveContainer" containerID="0db2c5a24c69dbf94e72b2bfc2842f965314eb0bb42eb83875201b8ecd3e4351" Nov 25 21:35:57 crc kubenswrapper[4983]: I1125 21:35:57.305194 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pljc7/crc-debug-drf62" Nov 25 21:35:57 crc kubenswrapper[4983]: I1125 21:35:57.617524 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dadef7ab-0a7f-434d-9345-4154f92eb72a" path="/var/lib/kubelet/pods/dadef7ab-0a7f-434d-9345-4154f92eb72a/volumes" Nov 25 21:36:08 crc kubenswrapper[4983]: I1125 21:36:08.606605 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:36:08 crc kubenswrapper[4983]: E1125 21:36:08.607267 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" Nov 25 21:36:18 crc kubenswrapper[4983]: I1125 21:36:18.776757 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b69f99f98-bmrwt_d239c72e-850f-45f1-9f9f-568c2bee1546/barbican-api/0.log" Nov 25 21:36:18 crc kubenswrapper[4983]: I1125 21:36:18.806776 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b69f99f98-bmrwt_d239c72e-850f-45f1-9f9f-568c2bee1546/barbican-api-log/0.log" Nov 25 21:36:19 crc kubenswrapper[4983]: I1125 21:36:19.010939 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6d94b4b49b-7bcmx_067348dd-7070-4616-871c-46a8ec91be00/barbican-keystone-listener-log/0.log" Nov 25 21:36:19 crc kubenswrapper[4983]: I1125 21:36:19.018383 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6d94b4b49b-7bcmx_067348dd-7070-4616-871c-46a8ec91be00/barbican-keystone-listener/0.log" Nov 25 21:36:19 crc kubenswrapper[4983]: 
I1125 21:36:19.052882 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-64d7f8cd7f-49776_d50f667a-e040-4db9-83d1-a1f72b138332/barbican-worker/0.log" Nov 25 21:36:19 crc kubenswrapper[4983]: I1125 21:36:19.221016 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-64d7f8cd7f-49776_d50f667a-e040-4db9-83d1-a1f72b138332/barbican-worker-log/0.log" Nov 25 21:36:19 crc kubenswrapper[4983]: I1125 21:36:19.246769 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-58w6h_96bb1f23-94d5-4a68-995b-da2394c75158/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:19 crc kubenswrapper[4983]: I1125 21:36:19.442545 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a0a145c6-0515-4cd9-98d1-438f069496e8/ceilometer-notification-agent/0.log" Nov 25 21:36:19 crc kubenswrapper[4983]: I1125 21:36:19.451222 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a0a145c6-0515-4cd9-98d1-438f069496e8/ceilometer-central-agent/0.log" Nov 25 21:36:19 crc kubenswrapper[4983]: I1125 21:36:19.468178 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a0a145c6-0515-4cd9-98d1-438f069496e8/proxy-httpd/0.log" Nov 25 21:36:19 crc kubenswrapper[4983]: I1125 21:36:19.508260 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a0a145c6-0515-4cd9-98d1-438f069496e8/sg-core/0.log" Nov 25 21:36:19 crc kubenswrapper[4983]: I1125 21:36:19.723722 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_af04822c-335c-4c44-9711-19c401c54c9f/cinder-api-log/0.log" Nov 25 21:36:19 crc kubenswrapper[4983]: I1125 21:36:19.724076 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_af04822c-335c-4c44-9711-19c401c54c9f/cinder-api/0.log" Nov 25 21:36:19 crc 
kubenswrapper[4983]: I1125 21:36:19.930935 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cb5e604e-0461-4f5c-acd3-412096243892/probe/0.log" Nov 25 21:36:20 crc kubenswrapper[4983]: I1125 21:36:20.011691 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_cb5e604e-0461-4f5c-acd3-412096243892/cinder-scheduler/0.log" Nov 25 21:36:20 crc kubenswrapper[4983]: I1125 21:36:20.048292 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-bft2l_0cc000c0-25d9-4390-b50f-da1ba38b6f7c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:20 crc kubenswrapper[4983]: I1125 21:36:20.855889 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6dmmp_da7ae86f-6623-4fd0-b7f1-ad16a2056571/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:20 crc kubenswrapper[4983]: I1125 21:36:20.901083 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-qb46c_1be61955-ba9d-4fef-8bb8-41bae01eb8a2/init/0.log" Nov 25 21:36:21 crc kubenswrapper[4983]: I1125 21:36:21.042051 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-qb46c_1be61955-ba9d-4fef-8bb8-41bae01eb8a2/init/0.log" Nov 25 21:36:21 crc kubenswrapper[4983]: I1125 21:36:21.115611 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-qb46c_1be61955-ba9d-4fef-8bb8-41bae01eb8a2/dnsmasq-dns/0.log" Nov 25 21:36:21 crc kubenswrapper[4983]: I1125 21:36:21.124082 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jxb6q_b4a4fbda-b4ec-4ca9-bfc3-b6e9f76a2d32/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:21 crc kubenswrapper[4983]: I1125 21:36:21.316128 4983 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_76343139-3638-4cc2-a865-ddb20d2d35a6/glance-log/0.log" Nov 25 21:36:21 crc kubenswrapper[4983]: I1125 21:36:21.336842 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_76343139-3638-4cc2-a865-ddb20d2d35a6/glance-httpd/0.log" Nov 25 21:36:21 crc kubenswrapper[4983]: I1125 21:36:21.456396 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e2eefb7f-341f-4f91-8b67-2fc45217b414/glance-httpd/0.log" Nov 25 21:36:21 crc kubenswrapper[4983]: I1125 21:36:21.527745 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e2eefb7f-341f-4f91-8b67-2fc45217b414/glance-log/0.log" Nov 25 21:36:21 crc kubenswrapper[4983]: I1125 21:36:21.781166 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vhq5v_e24ee2d5-4f5b-4102-a0ca-45f7aed3c7b8/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:21 crc kubenswrapper[4983]: I1125 21:36:21.794308 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7f9d7c8cfb-s259l_ed474a92-4901-4ded-89c1-736427d72c92/horizon/0.log" Nov 25 21:36:22 crc kubenswrapper[4983]: I1125 21:36:22.032143 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-jt8gn_aa0b2190-bcf1-4f2a-8e87-4805b514d3bf/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:22 crc kubenswrapper[4983]: I1125 21:36:22.119729 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7f9d7c8cfb-s259l_ed474a92-4901-4ded-89c1-736427d72c92/horizon-log/0.log" Nov 25 21:36:22 crc kubenswrapper[4983]: I1125 21:36:22.317261 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29401741-hmw9c_294b565c-28a4-45a2-a9af-8eefed6b82a4/keystone-cron/0.log" Nov 25 21:36:22 crc kubenswrapper[4983]: I1125 21:36:22.388480 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7b58ff8778-fz55h_9aca410d-c0fd-4ba7-81c0-434416f8dfbd/keystone-api/0.log" Nov 25 21:36:22 crc kubenswrapper[4983]: I1125 21:36:22.398931 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ae259426-d08e-4d8f-b3e7-f06847f1c2da/kube-state-metrics/3.log" Nov 25 21:36:22 crc kubenswrapper[4983]: I1125 21:36:22.489975 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ae259426-d08e-4d8f-b3e7-f06847f1c2da/kube-state-metrics/2.log" Nov 25 21:36:22 crc kubenswrapper[4983]: I1125 21:36:22.574357 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-tjz9c_4de4d7c6-ee24-4f8e-97c6-d15a5cd43e90/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:22 crc kubenswrapper[4983]: I1125 21:36:22.605658 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:36:22 crc kubenswrapper[4983]: I1125 21:36:22.883792 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-778b5c8885-ww4ht_14a0dfa8-a664-45d8-bb1d-731f807b1427/neutron-httpd/0.log" Nov 25 21:36:22 crc kubenswrapper[4983]: I1125 21:36:22.955509 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-g4qph_996735a0-8e3c-4c62-9403-3e02669b7c63/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:22 crc kubenswrapper[4983]: I1125 21:36:22.966499 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-778b5c8885-ww4ht_14a0dfa8-a664-45d8-bb1d-731f807b1427/neutron-api/0.log" Nov 25 21:36:23 crc 
kubenswrapper[4983]: I1125 21:36:23.558493 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"f5e935aa4c18062ea9c0850830cdd7bce9f90f4526f93c77397efbe4e20c1833"} Nov 25 21:36:23 crc kubenswrapper[4983]: I1125 21:36:23.632965 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b199cea8-e855-4316-b80b-8cad8bce9f45/nova-api-log/0.log" Nov 25 21:36:23 crc kubenswrapper[4983]: I1125 21:36:23.730694 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_279216cc-b7af-430b-95ff-07b9330eea8c/nova-cell0-conductor-conductor/0.log" Nov 25 21:36:23 crc kubenswrapper[4983]: I1125 21:36:23.963671 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3eb68b08-479a-4831-b6a5-ad478a3922e5/nova-cell1-conductor-conductor/0.log" Nov 25 21:36:24 crc kubenswrapper[4983]: I1125 21:36:24.229255 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-b7m44_7ce9c984-8450-479b-aa5f-58f81943cf56/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:24 crc kubenswrapper[4983]: I1125 21:36:24.244801 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b199cea8-e855-4316-b80b-8cad8bce9f45/nova-api-api/0.log" Nov 25 21:36:24 crc kubenswrapper[4983]: I1125 21:36:24.280477 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_968ee4da-4360-486b-a70a-a805a19a6b42/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 21:36:24 crc kubenswrapper[4983]: I1125 21:36:24.632438 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_eba27c66-d8be-4e3c-a39c-2f521c69a3d6/nova-metadata-log/0.log" Nov 25 21:36:24 crc kubenswrapper[4983]: I1125 21:36:24.836509 4983 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_13cd3da7-02fa-42c2-a62a-527df23e92b1/mysql-bootstrap/0.log" Nov 25 21:36:24 crc kubenswrapper[4983]: I1125 21:36:24.892106 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_0aa32390-cf93-44b1-b27f-4b66ffb61a41/nova-scheduler-scheduler/0.log" Nov 25 21:36:25 crc kubenswrapper[4983]: I1125 21:36:25.021446 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_13cd3da7-02fa-42c2-a62a-527df23e92b1/mysql-bootstrap/0.log" Nov 25 21:36:25 crc kubenswrapper[4983]: I1125 21:36:25.047438 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_13cd3da7-02fa-42c2-a62a-527df23e92b1/galera/0.log" Nov 25 21:36:25 crc kubenswrapper[4983]: I1125 21:36:25.281910 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ca63c157-60df-45de-854f-03989f565e8f/mysql-bootstrap/0.log" Nov 25 21:36:25 crc kubenswrapper[4983]: I1125 21:36:25.505844 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ca63c157-60df-45de-854f-03989f565e8f/mysql-bootstrap/0.log" Nov 25 21:36:25 crc kubenswrapper[4983]: I1125 21:36:25.536371 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ca63c157-60df-45de-854f-03989f565e8f/galera/0.log" Nov 25 21:36:25 crc kubenswrapper[4983]: I1125 21:36:25.698201 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3b2fefe1-596f-4e7c-8de9-b3c019ed40ea/openstackclient/0.log" Nov 25 21:36:25 crc kubenswrapper[4983]: I1125 21:36:25.736778 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fgn5f_cd8b1052-9050-4771-8be4-3138d9c54d62/ovn-controller/0.log" Nov 25 21:36:26 crc kubenswrapper[4983]: I1125 21:36:26.018423 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_eba27c66-d8be-4e3c-a39c-2f521c69a3d6/nova-metadata-metadata/0.log" Nov 25 21:36:26 crc kubenswrapper[4983]: I1125 21:36:26.065956 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jxnnh_18caf88a-0da7-4144-9c11-301f0a49f3fb/openstack-network-exporter/0.log" Nov 25 21:36:26 crc kubenswrapper[4983]: I1125 21:36:26.211794 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-47bp7_1ab2fd6b-f417-4b0e-b1ac-d374d64b7712/ovsdb-server-init/0.log" Nov 25 21:36:26 crc kubenswrapper[4983]: I1125 21:36:26.392696 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-47bp7_1ab2fd6b-f417-4b0e-b1ac-d374d64b7712/ovsdb-server-init/0.log" Nov 25 21:36:26 crc kubenswrapper[4983]: I1125 21:36:26.401382 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-47bp7_1ab2fd6b-f417-4b0e-b1ac-d374d64b7712/ovs-vswitchd/0.log" Nov 25 21:36:26 crc kubenswrapper[4983]: I1125 21:36:26.429922 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-47bp7_1ab2fd6b-f417-4b0e-b1ac-d374d64b7712/ovsdb-server/0.log" Nov 25 21:36:26 crc kubenswrapper[4983]: I1125 21:36:26.606748 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b06f6c03-dbba-48c9-901d-8cf6ef8048b1/ovn-northd/0.log" Nov 25 21:36:26 crc kubenswrapper[4983]: I1125 21:36:26.726670 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qkd9n_71b3a358-6645-404b-8d14-cb6371e7fce4/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:26 crc kubenswrapper[4983]: I1125 21:36:26.769504 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b06f6c03-dbba-48c9-901d-8cf6ef8048b1/openstack-network-exporter/0.log" Nov 25 21:36:26 crc kubenswrapper[4983]: I1125 21:36:26.845445 4983 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_255fbb78-ee7b-4e1f-bd48-d260792d9be4/openstack-network-exporter/0.log" Nov 25 21:36:26 crc kubenswrapper[4983]: I1125 21:36:26.954661 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_255fbb78-ee7b-4e1f-bd48-d260792d9be4/ovsdbserver-nb/0.log" Nov 25 21:36:27 crc kubenswrapper[4983]: I1125 21:36:27.022249 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_410c54ac-f4e0-4c9f-873e-939b19eb303b/openstack-network-exporter/0.log" Nov 25 21:36:27 crc kubenswrapper[4983]: I1125 21:36:27.057094 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_410c54ac-f4e0-4c9f-873e-939b19eb303b/ovsdbserver-sb/0.log" Nov 25 21:36:27 crc kubenswrapper[4983]: I1125 21:36:27.331753 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ffcbfd47b-hljtd_545d9a00-2ce8-463f-b16c-6b7c0ac426be/placement-api/0.log" Nov 25 21:36:27 crc kubenswrapper[4983]: I1125 21:36:27.384455 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ffcbfd47b-hljtd_545d9a00-2ce8-463f-b16c-6b7c0ac426be/placement-log/0.log" Nov 25 21:36:27 crc kubenswrapper[4983]: I1125 21:36:27.412412 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e5063408-1226-4adc-86e9-194a32761df9/setup-container/0.log" Nov 25 21:36:28 crc kubenswrapper[4983]: I1125 21:36:28.157627 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e5063408-1226-4adc-86e9-194a32761df9/setup-container/0.log" Nov 25 21:36:28 crc kubenswrapper[4983]: I1125 21:36:28.216055 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e5063408-1226-4adc-86e9-194a32761df9/rabbitmq/0.log" Nov 25 21:36:28 crc kubenswrapper[4983]: I1125 21:36:28.222880 4983 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_df26c674-505f-44d6-9fd2-24d745739946/setup-container/0.log" Nov 25 21:36:28 crc kubenswrapper[4983]: I1125 21:36:28.435897 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_df26c674-505f-44d6-9fd2-24d745739946/setup-container/0.log" Nov 25 21:36:28 crc kubenswrapper[4983]: I1125 21:36:28.449731 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_df26c674-505f-44d6-9fd2-24d745739946/rabbitmq/0.log" Nov 25 21:36:28 crc kubenswrapper[4983]: I1125 21:36:28.514315 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-25sjt_11f44fe7-6b39-418d-9c54-d0f05318f412/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:28 crc kubenswrapper[4983]: I1125 21:36:28.658466 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-5csrn_f1c31e5c-0dca-4993-b845-286b47b3b6ee/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:28 crc kubenswrapper[4983]: I1125 21:36:28.813457 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mw4dz_599f17a5-8483-4c0e-aca0-27677abeba08/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:28 crc kubenswrapper[4983]: I1125 21:36:28.887660 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-6lsbm_283ae6fd-423e-4c78-9c5b-85aab813c0b5/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:28 crc kubenswrapper[4983]: I1125 21:36:28.993882 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cdnxr_3080e73e-fbc1-4a80-827c-386f923dd01b/ssh-known-hosts-edpm-deployment/0.log" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.268343 4983 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-proxy-6466c6df55-xffs5_b05ecf5f-8220-40f4-b459-27d2dd7c6fbf/proxy-httpd/0.log" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.273818 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6466c6df55-xffs5_b05ecf5f-8220-40f4-b459-27d2dd7c6fbf/proxy-server/0.log" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.331945 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-rln2d_9681d7cb-ab2c-4458-bc07-a7d278f16fd2/swift-ring-rebalance/0.log" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.523357 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/account-reaper/0.log" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.528008 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/account-auditor/0.log" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.660247 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nxr5p"] Nov 25 21:36:29 crc kubenswrapper[4983]: E1125 21:36:29.660567 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dadef7ab-0a7f-434d-9345-4154f92eb72a" containerName="container-00" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.660580 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="dadef7ab-0a7f-434d-9345-4154f92eb72a" containerName="container-00" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.660796 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="dadef7ab-0a7f-434d-9345-4154f92eb72a" containerName="container-00" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.662139 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nxr5p"] Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 
21:36:29.662233 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.821987 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27eee909-a0a0-4040-b765-1c7f0db4beb1-utilities\") pod \"community-operators-nxr5p\" (UID: \"27eee909-a0a0-4040-b765-1c7f0db4beb1\") " pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.822120 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-968xh\" (UniqueName: \"kubernetes.io/projected/27eee909-a0a0-4040-b765-1c7f0db4beb1-kube-api-access-968xh\") pod \"community-operators-nxr5p\" (UID: \"27eee909-a0a0-4040-b765-1c7f0db4beb1\") " pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.822164 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27eee909-a0a0-4040-b765-1c7f0db4beb1-catalog-content\") pod \"community-operators-nxr5p\" (UID: \"27eee909-a0a0-4040-b765-1c7f0db4beb1\") " pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.923516 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27eee909-a0a0-4040-b765-1c7f0db4beb1-utilities\") pod \"community-operators-nxr5p\" (UID: \"27eee909-a0a0-4040-b765-1c7f0db4beb1\") " pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.923640 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-968xh\" (UniqueName: 
\"kubernetes.io/projected/27eee909-a0a0-4040-b765-1c7f0db4beb1-kube-api-access-968xh\") pod \"community-operators-nxr5p\" (UID: \"27eee909-a0a0-4040-b765-1c7f0db4beb1\") " pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.923693 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27eee909-a0a0-4040-b765-1c7f0db4beb1-catalog-content\") pod \"community-operators-nxr5p\" (UID: \"27eee909-a0a0-4040-b765-1c7f0db4beb1\") " pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.924345 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27eee909-a0a0-4040-b765-1c7f0db4beb1-catalog-content\") pod \"community-operators-nxr5p\" (UID: \"27eee909-a0a0-4040-b765-1c7f0db4beb1\") " pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.924463 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27eee909-a0a0-4040-b765-1c7f0db4beb1-utilities\") pod \"community-operators-nxr5p\" (UID: \"27eee909-a0a0-4040-b765-1c7f0db4beb1\") " pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.949839 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-968xh\" (UniqueName: \"kubernetes.io/projected/27eee909-a0a0-4040-b765-1c7f0db4beb1-kube-api-access-968xh\") pod \"community-operators-nxr5p\" (UID: \"27eee909-a0a0-4040-b765-1c7f0db4beb1\") " pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:29 crc kubenswrapper[4983]: I1125 21:36:29.987099 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.363438 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/account-replicator/0.log" Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.436360 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/container-replicator/0.log" Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.436906 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/container-auditor/0.log" Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.454632 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/account-server/0.log" Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.568050 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nxr5p"] Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.596519 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/container-server/0.log" Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.687591 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxr5p" event={"ID":"27eee909-a0a0-4040-b765-1c7f0db4beb1","Type":"ContainerStarted","Data":"6b0d5c7f6ec9466d8976c0bbe44d53e0476c422ca93ca87be0ef43fdee2ef223"} Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.712122 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/container-updater/0.log" Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.722847 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/object-auditor/0.log" Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.731136 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/object-expirer/0.log" Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.856378 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/object-replicator/0.log" Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.875081 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/object-server/0.log" Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.914761 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/object-updater/0.log" Nov 25 21:36:30 crc kubenswrapper[4983]: I1125 21:36:30.931104 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/rsync/0.log" Nov 25 21:36:31 crc kubenswrapper[4983]: I1125 21:36:31.095247 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_214288a7-ce6d-4844-b3f6-8ab78b7e1b54/swift-recon-cron/0.log" Nov 25 21:36:31 crc kubenswrapper[4983]: I1125 21:36:31.140535 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-bdnqm_34445193-9a8d-4ebd-ac42-d8348c11e375/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:31 crc kubenswrapper[4983]: I1125 21:36:31.312402 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_66868750-3f73-47fe-a353-f88441e69915/tempest-tests-tempest-tests-runner/0.log" Nov 25 21:36:31 crc kubenswrapper[4983]: I1125 21:36:31.334914 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8eafda73-e6d5-4b11-bd89-75308a7ca93b/test-operator-logs-container/0.log" Nov 25 21:36:31 crc kubenswrapper[4983]: I1125 21:36:31.530043 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-4dqgf_5f2f45e7-9dd0-4273-bb23-9191f1a5ea93/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 21:36:31 crc kubenswrapper[4983]: I1125 21:36:31.705933 4983 generic.go:334] "Generic (PLEG): container finished" podID="27eee909-a0a0-4040-b765-1c7f0db4beb1" containerID="8a22d2faf6215317889aa7899e6aaa6db852963b32d61732da44c9a37fd6553e" exitCode=0 Nov 25 21:36:31 crc kubenswrapper[4983]: I1125 21:36:31.705995 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxr5p" event={"ID":"27eee909-a0a0-4040-b765-1c7f0db4beb1","Type":"ContainerDied","Data":"8a22d2faf6215317889aa7899e6aaa6db852963b32d61732da44c9a37fd6553e"} Nov 25 21:36:33 crc kubenswrapper[4983]: I1125 21:36:33.733052 4983 generic.go:334] "Generic (PLEG): container finished" podID="27eee909-a0a0-4040-b765-1c7f0db4beb1" containerID="68f73c88faa01a1265364c8972d0ab8db9ebe640db5d669681c4c94c7408b83b" exitCode=0 Nov 25 21:36:33 crc kubenswrapper[4983]: I1125 21:36:33.733419 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxr5p" event={"ID":"27eee909-a0a0-4040-b765-1c7f0db4beb1","Type":"ContainerDied","Data":"68f73c88faa01a1265364c8972d0ab8db9ebe640db5d669681c4c94c7408b83b"} Nov 25 21:36:34 crc kubenswrapper[4983]: I1125 21:36:34.745494 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxr5p" event={"ID":"27eee909-a0a0-4040-b765-1c7f0db4beb1","Type":"ContainerStarted","Data":"2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c"} Nov 25 21:36:34 crc kubenswrapper[4983]: I1125 21:36:34.772515 4983 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nxr5p" podStartSLOduration=3.304823142 podStartE2EDuration="5.77249337s" podCreationTimestamp="2025-11-25 21:36:29 +0000 UTC" firstStartedPulling="2025-11-25 21:36:31.70905333 +0000 UTC m=+4172.821586722" lastFinishedPulling="2025-11-25 21:36:34.176723548 +0000 UTC m=+4175.289256950" observedRunningTime="2025-11-25 21:36:34.765964425 +0000 UTC m=+4175.878497837" watchObservedRunningTime="2025-11-25 21:36:34.77249337 +0000 UTC m=+4175.885026762" Nov 25 21:36:39 crc kubenswrapper[4983]: I1125 21:36:39.987751 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:39 crc kubenswrapper[4983]: I1125 21:36:39.988334 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:40 crc kubenswrapper[4983]: I1125 21:36:40.036247 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:40 crc kubenswrapper[4983]: I1125 21:36:40.857673 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:40 crc kubenswrapper[4983]: I1125 21:36:40.915327 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nxr5p"] Nov 25 21:36:42 crc kubenswrapper[4983]: I1125 21:36:42.828209 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nxr5p" podUID="27eee909-a0a0-4040-b765-1c7f0db4beb1" containerName="registry-server" containerID="cri-o://2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c" gracePeriod=2 Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.378069 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.553453 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6ba42cf7-cc02-4214-a4e5-c20d987aed64/memcached/0.log" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.568672 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-968xh\" (UniqueName: \"kubernetes.io/projected/27eee909-a0a0-4040-b765-1c7f0db4beb1-kube-api-access-968xh\") pod \"27eee909-a0a0-4040-b765-1c7f0db4beb1\" (UID: \"27eee909-a0a0-4040-b765-1c7f0db4beb1\") " Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.568725 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27eee909-a0a0-4040-b765-1c7f0db4beb1-catalog-content\") pod \"27eee909-a0a0-4040-b765-1c7f0db4beb1\" (UID: \"27eee909-a0a0-4040-b765-1c7f0db4beb1\") " Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.568795 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27eee909-a0a0-4040-b765-1c7f0db4beb1-utilities\") pod \"27eee909-a0a0-4040-b765-1c7f0db4beb1\" (UID: \"27eee909-a0a0-4040-b765-1c7f0db4beb1\") " Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.570036 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27eee909-a0a0-4040-b765-1c7f0db4beb1-utilities" (OuterVolumeSpecName: "utilities") pod "27eee909-a0a0-4040-b765-1c7f0db4beb1" (UID: "27eee909-a0a0-4040-b765-1c7f0db4beb1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.599294 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27eee909-a0a0-4040-b765-1c7f0db4beb1-kube-api-access-968xh" (OuterVolumeSpecName: "kube-api-access-968xh") pod "27eee909-a0a0-4040-b765-1c7f0db4beb1" (UID: "27eee909-a0a0-4040-b765-1c7f0db4beb1"). InnerVolumeSpecName "kube-api-access-968xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.670866 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-968xh\" (UniqueName: \"kubernetes.io/projected/27eee909-a0a0-4040-b765-1c7f0db4beb1-kube-api-access-968xh\") on node \"crc\" DevicePath \"\"" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.670895 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27eee909-a0a0-4040-b765-1c7f0db4beb1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.713884 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27eee909-a0a0-4040-b765-1c7f0db4beb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27eee909-a0a0-4040-b765-1c7f0db4beb1" (UID: "27eee909-a0a0-4040-b765-1c7f0db4beb1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.774081 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27eee909-a0a0-4040-b765-1c7f0db4beb1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.836160 4983 generic.go:334] "Generic (PLEG): container finished" podID="27eee909-a0a0-4040-b765-1c7f0db4beb1" containerID="2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c" exitCode=0 Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.836197 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxr5p" event={"ID":"27eee909-a0a0-4040-b765-1c7f0db4beb1","Type":"ContainerDied","Data":"2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c"} Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.836230 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxr5p" event={"ID":"27eee909-a0a0-4040-b765-1c7f0db4beb1","Type":"ContainerDied","Data":"6b0d5c7f6ec9466d8976c0bbe44d53e0476c422ca93ca87be0ef43fdee2ef223"} Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.836201 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nxr5p" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.836247 4983 scope.go:117] "RemoveContainer" containerID="2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.863677 4983 scope.go:117] "RemoveContainer" containerID="68f73c88faa01a1265364c8972d0ab8db9ebe640db5d669681c4c94c7408b83b" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.883648 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nxr5p"] Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.898330 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nxr5p"] Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.917945 4983 scope.go:117] "RemoveContainer" containerID="8a22d2faf6215317889aa7899e6aaa6db852963b32d61732da44c9a37fd6553e" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.940826 4983 scope.go:117] "RemoveContainer" containerID="2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c" Nov 25 21:36:43 crc kubenswrapper[4983]: E1125 21:36:43.941282 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c\": container with ID starting with 2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c not found: ID does not exist" containerID="2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.941329 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c"} err="failed to get container status \"2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c\": rpc error: code = NotFound desc = could not find 
container \"2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c\": container with ID starting with 2d5f3f611b6e21a787306e4584618e9d622761d674da06a1582b39305fddd77c not found: ID does not exist" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.941350 4983 scope.go:117] "RemoveContainer" containerID="68f73c88faa01a1265364c8972d0ab8db9ebe640db5d669681c4c94c7408b83b" Nov 25 21:36:43 crc kubenswrapper[4983]: E1125 21:36:43.941582 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68f73c88faa01a1265364c8972d0ab8db9ebe640db5d669681c4c94c7408b83b\": container with ID starting with 68f73c88faa01a1265364c8972d0ab8db9ebe640db5d669681c4c94c7408b83b not found: ID does not exist" containerID="68f73c88faa01a1265364c8972d0ab8db9ebe640db5d669681c4c94c7408b83b" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.941618 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68f73c88faa01a1265364c8972d0ab8db9ebe640db5d669681c4c94c7408b83b"} err="failed to get container status \"68f73c88faa01a1265364c8972d0ab8db9ebe640db5d669681c4c94c7408b83b\": rpc error: code = NotFound desc = could not find container \"68f73c88faa01a1265364c8972d0ab8db9ebe640db5d669681c4c94c7408b83b\": container with ID starting with 68f73c88faa01a1265364c8972d0ab8db9ebe640db5d669681c4c94c7408b83b not found: ID does not exist" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.941636 4983 scope.go:117] "RemoveContainer" containerID="8a22d2faf6215317889aa7899e6aaa6db852963b32d61732da44c9a37fd6553e" Nov 25 21:36:43 crc kubenswrapper[4983]: E1125 21:36:43.942184 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a22d2faf6215317889aa7899e6aaa6db852963b32d61732da44c9a37fd6553e\": container with ID starting with 8a22d2faf6215317889aa7899e6aaa6db852963b32d61732da44c9a37fd6553e not found: ID does 
not exist" containerID="8a22d2faf6215317889aa7899e6aaa6db852963b32d61732da44c9a37fd6553e" Nov 25 21:36:43 crc kubenswrapper[4983]: I1125 21:36:43.942204 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a22d2faf6215317889aa7899e6aaa6db852963b32d61732da44c9a37fd6553e"} err="failed to get container status \"8a22d2faf6215317889aa7899e6aaa6db852963b32d61732da44c9a37fd6553e\": rpc error: code = NotFound desc = could not find container \"8a22d2faf6215317889aa7899e6aaa6db852963b32d61732da44c9a37fd6553e\": container with ID starting with 8a22d2faf6215317889aa7899e6aaa6db852963b32d61732da44c9a37fd6553e not found: ID does not exist" Nov 25 21:36:45 crc kubenswrapper[4983]: I1125 21:36:45.614720 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27eee909-a0a0-4040-b765-1c7f0db4beb1" path="/var/lib/kubelet/pods/27eee909-a0a0-4040-b765-1c7f0db4beb1/volumes" Nov 25 21:36:59 crc kubenswrapper[4983]: I1125 21:36:59.009600 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/util/0.log" Nov 25 21:36:59 crc kubenswrapper[4983]: I1125 21:36:59.150570 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/util/0.log" Nov 25 21:36:59 crc kubenswrapper[4983]: I1125 21:36:59.206574 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/pull/0.log" Nov 25 21:36:59 crc kubenswrapper[4983]: I1125 21:36:59.218276 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/pull/0.log" Nov 25 21:36:59 crc 
kubenswrapper[4983]: I1125 21:36:59.417134 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/pull/0.log" Nov 25 21:36:59 crc kubenswrapper[4983]: I1125 21:36:59.422756 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/util/0.log" Nov 25 21:36:59 crc kubenswrapper[4983]: I1125 21:36:59.432641 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5025d178126b71404e608d2bb5b600e24e124e11c09ccc9c402d2044e9q77kp_5df8fe9d-7ee2-4f34-a56d-d7baaa1e4183/extract/0.log" Nov 25 21:36:59 crc kubenswrapper[4983]: I1125 21:36:59.591619 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-nf6tq_1ec6aefb-824e-4248-ac00-c1d0b526edc6/manager/1.log" Nov 25 21:36:59 crc kubenswrapper[4983]: I1125 21:36:59.598105 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-nf6tq_1ec6aefb-824e-4248-ac00-c1d0b526edc6/kube-rbac-proxy/0.log" Nov 25 21:36:59 crc kubenswrapper[4983]: I1125 21:36:59.626416 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-nf6tq_1ec6aefb-824e-4248-ac00-c1d0b526edc6/manager/2.log" Nov 25 21:36:59 crc kubenswrapper[4983]: I1125 21:36:59.759104 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-b9lnt_cf765330-a0f9-4603-a92b-4aec8feaeafb/manager/2.log" Nov 25 21:36:59 crc kubenswrapper[4983]: I1125 21:36:59.769173 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-b9lnt_cf765330-a0f9-4603-a92b-4aec8feaeafb/kube-rbac-proxy/0.log" Nov 25 21:36:59 crc kubenswrapper[4983]: I1125 21:36:59.821767 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-b9lnt_cf765330-a0f9-4603-a92b-4aec8feaeafb/manager/1.log" Nov 25 21:37:00 crc kubenswrapper[4983]: I1125 21:37:00.527294 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-lzn84_00a7db78-81a7-481d-a20e-135c60e139e3/kube-rbac-proxy/0.log" Nov 25 21:37:00 crc kubenswrapper[4983]: I1125 21:37:00.568148 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-lzn84_00a7db78-81a7-481d-a20e-135c60e139e3/manager/1.log" Nov 25 21:37:00 crc kubenswrapper[4983]: I1125 21:37:00.575864 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-lzn84_00a7db78-81a7-481d-a20e-135c60e139e3/manager/2.log" Nov 25 21:37:00 crc kubenswrapper[4983]: I1125 21:37:00.710126 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-xvxp7_da827172-6e3a-42a7-814c-cdfcc18d48d6/kube-rbac-proxy/0.log" Nov 25 21:37:00 crc kubenswrapper[4983]: I1125 21:37:00.740139 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-xvxp7_da827172-6e3a-42a7-814c-cdfcc18d48d6/manager/1.log" Nov 25 21:37:00 crc kubenswrapper[4983]: I1125 21:37:00.753759 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-xvxp7_da827172-6e3a-42a7-814c-cdfcc18d48d6/manager/2.log" Nov 25 21:37:00 crc kubenswrapper[4983]: I1125 21:37:00.879031 4983 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-t5knb_48b3567f-5b1a-4f14-891c-775c05e2d768/kube-rbac-proxy/0.log" Nov 25 21:37:00 crc kubenswrapper[4983]: I1125 21:37:00.911106 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-t5knb_48b3567f-5b1a-4f14-891c-775c05e2d768/manager/2.log" Nov 25 21:37:00 crc kubenswrapper[4983]: I1125 21:37:00.935170 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-t5knb_48b3567f-5b1a-4f14-891c-775c05e2d768/manager/1.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.076727 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-cctnq_72f1d28e-26ff-43d3-bd93-54c21d9cdd70/kube-rbac-proxy/0.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.082646 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-cctnq_72f1d28e-26ff-43d3-bd93-54c21d9cdd70/manager/1.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.084939 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-cctnq_72f1d28e-26ff-43d3-bd93-54c21d9cdd70/manager/2.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.267465 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-qlm9k_0d3d657c-e179-43c7-abca-c37f8396d1cd/manager/1.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.269962 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-qlm9k_0d3d657c-e179-43c7-abca-c37f8396d1cd/kube-rbac-proxy/0.log" Nov 25 21:37:01 crc kubenswrapper[4983]: 
I1125 21:37:01.275378 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-qlm9k_0d3d657c-e179-43c7-abca-c37f8396d1cd/manager/2.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.432791 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-9zpxb_e1668e7f-55bb-415c-b378-1c70483b30a6/kube-rbac-proxy/0.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.432824 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-9zpxb_e1668e7f-55bb-415c-b378-1c70483b30a6/manager/3.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.488044 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-9zpxb_e1668e7f-55bb-415c-b378-1c70483b30a6/manager/2.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.567885 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-fchv4_e5edd26f-9ffb-4be8-86c1-99d32e812816/kube-rbac-proxy/0.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.609322 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-fchv4_e5edd26f-9ffb-4be8-86c1-99d32e812816/manager/2.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.623387 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-fchv4_e5edd26f-9ffb-4be8-86c1-99d32e812816/manager/1.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.751449 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-f8bh4_2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98/kube-rbac-proxy/0.log" Nov 
25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.761978 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-f8bh4_2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98/manager/2.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.775104 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-f8bh4_2bb3e4e5-dd92-4f7d-b69a-b807d19a9e98/manager/1.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.922891 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-rwkrr_afff7723-36e3-42ae-9fac-9f8fdb86d839/kube-rbac-proxy/0.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.945128 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-rwkrr_afff7723-36e3-42ae-9fac-9f8fdb86d839/manager/1.log" Nov 25 21:37:01 crc kubenswrapper[4983]: I1125 21:37:01.953831 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-rwkrr_afff7723-36e3-42ae-9fac-9f8fdb86d839/manager/2.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.003834 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-ljpb8_badb10c7-4c8c-42c4-b481-221377fa7255/kube-rbac-proxy/0.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.102762 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-ljpb8_badb10c7-4c8c-42c4-b481-221377fa7255/manager/2.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.130319 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-ljpb8_badb10c7-4c8c-42c4-b481-221377fa7255/manager/1.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.168772 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-dj7nt_9d7c78e4-4890-4527-9db4-131842750615/kube-rbac-proxy/0.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.192625 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-dj7nt_9d7c78e4-4890-4527-9db4-131842750615/manager/2.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.331905 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-dj7nt_9d7c78e4-4890-4527-9db4-131842750615/manager/1.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.333914 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-p8q9g_a096f840-35b3-48c1-8c0e-762b67b8bde0/kube-rbac-proxy/0.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.361032 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-p8q9g_a096f840-35b3-48c1-8c0e-762b67b8bde0/manager/3.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.401856 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-p8q9g_a096f840-35b3-48c1-8c0e-762b67b8bde0/manager/2.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.521613 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg_4743af06-44e2-438a-82b7-bf32b0f5ca03/kube-rbac-proxy/0.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.553406 
4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg_4743af06-44e2-438a-82b7-bf32b0f5ca03/manager/0.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.592125 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6bj24kg_4743af06-44e2-438a-82b7-bf32b0f5ca03/manager/1.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.744609 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cf7cd9d4-bwfnd_f32095da-1fdc-4d52-b082-98b39652cdc6/manager/1.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.804107 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6b8dd87645-g89th_668ad5ef-ec7f-4239-94c5-8bb868f653ce/operator/1.log" Nov 25 21:37:02 crc kubenswrapper[4983]: I1125 21:37:02.952156 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-stqz2_2a6b637c-d929-42fa-89c6-8e5af3746cc1/registry-server/0.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.001761 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5cf7cd9d4-bwfnd_f32095da-1fdc-4d52-b082-98b39652cdc6/manager/2.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.016280 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6b8dd87645-g89th_668ad5ef-ec7f-4239-94c5-8bb868f653ce/operator/0.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.106199 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-zc5rq_d7302bdd-d74f-4d95-a354-42fcd52bf22e/kube-rbac-proxy/0.log" Nov 25 21:37:03 crc 
kubenswrapper[4983]: I1125 21:37:03.152524 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-zc5rq_d7302bdd-d74f-4d95-a354-42fcd52bf22e/manager/2.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.190665 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-zc5rq_d7302bdd-d74f-4d95-a354-42fcd52bf22e/manager/1.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.191009 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-mhjtj_64141c1d-799a-4d72-aa99-e54975052879/kube-rbac-proxy/0.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.284026 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-mhjtj_64141c1d-799a-4d72-aa99-e54975052879/manager/2.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.315845 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-mhjtj_64141c1d-799a-4d72-aa99-e54975052879/manager/1.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.388068 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-bwf7d_ff284fea-7792-40e1-8ede-f52412a6c014/operator/2.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.390394 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-bwf7d_ff284fea-7792-40e1-8ede-f52412a6c014/operator/1.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.482147 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-4c95t_5b14316c-9639-4934-a5e9-5381d2797ef5/kube-rbac-proxy/0.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.555969 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-4c95t_5b14316c-9639-4934-a5e9-5381d2797ef5/manager/2.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.556918 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-4c95t_5b14316c-9639-4934-a5e9-5381d2797ef5/manager/1.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.729894 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b7bb74d9f-m9bbx_92f1d8fa-69cf-49c3-a616-82a185ff8dd5/kube-rbac-proxy/0.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.789355 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b7bb74d9f-m9bbx_92f1d8fa-69cf-49c3-a616-82a185ff8dd5/manager/2.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.828767 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b7bb74d9f-m9bbx_92f1d8fa-69cf-49c3-a616-82a185ff8dd5/manager/1.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.948020 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-lr7wt_ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042/kube-rbac-proxy/0.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.958364 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-lr7wt_ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042/manager/1.log" Nov 25 21:37:03 crc kubenswrapper[4983]: I1125 21:37:03.980600 4983 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-lr7wt_ca7c2bed-d9e1-4eb9-b50e-fee1d2eac042/manager/0.log" Nov 25 21:37:04 crc kubenswrapper[4983]: I1125 21:37:04.058931 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-rpfhz_1e439ca1-98f3-4650-96da-1e4c1b2da37e/kube-rbac-proxy/0.log" Nov 25 21:37:04 crc kubenswrapper[4983]: I1125 21:37:04.113358 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-rpfhz_1e439ca1-98f3-4650-96da-1e4c1b2da37e/manager/2.log" Nov 25 21:37:04 crc kubenswrapper[4983]: I1125 21:37:04.132787 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-rpfhz_1e439ca1-98f3-4650-96da-1e4c1b2da37e/manager/1.log" Nov 25 21:37:23 crc kubenswrapper[4983]: I1125 21:37:23.889641 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-d2mch_ec800216-1c1b-4324-a1be-2a0c5dcc6ce5/control-plane-machine-set-operator/0.log" Nov 25 21:37:24 crc kubenswrapper[4983]: I1125 21:37:24.060938 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ztngk_aed03db9-cd2b-4aa5-96d4-de0e00e95842/kube-rbac-proxy/0.log" Nov 25 21:37:24 crc kubenswrapper[4983]: I1125 21:37:24.081482 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-ztngk_aed03db9-cd2b-4aa5-96d4-de0e00e95842/machine-api-operator/0.log" Nov 25 21:37:37 crc kubenswrapper[4983]: I1125 21:37:37.644899 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-wgvpx_3603f9e9-5a0e-4283-86a8-4fa4a2b1d98a/cert-manager-controller/0.log" Nov 25 21:37:37 crc kubenswrapper[4983]: I1125 
21:37:37.724788 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-vgj58_a84e28f5-6c16-49c9-aaee-2e1ba4b547a3/cert-manager-cainjector/0.log" Nov 25 21:37:37 crc kubenswrapper[4983]: I1125 21:37:37.857018 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-5xb5k_fea16ade-51b4-491b-acb2-4a3d5974bf0c/cert-manager-webhook/0.log" Nov 25 21:37:51 crc kubenswrapper[4983]: I1125 21:37:51.751354 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-fkqdw_cdedcf78-6faf-457d-8817-2d87dc07b913/nmstate-console-plugin/0.log" Nov 25 21:37:51 crc kubenswrapper[4983]: I1125 21:37:51.936101 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q89j2_ab060c60-6a98-4358-a028-e3600d0239f4/nmstate-handler/0.log" Nov 25 21:37:51 crc kubenswrapper[4983]: I1125 21:37:51.987854 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-gz8k6_1064f79e-2d97-4733-a2f2-f5f96b204825/kube-rbac-proxy/0.log" Nov 25 21:37:52 crc kubenswrapper[4983]: I1125 21:37:52.022855 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-gz8k6_1064f79e-2d97-4733-a2f2-f5f96b204825/nmstate-metrics/0.log" Nov 25 21:37:52 crc kubenswrapper[4983]: I1125 21:37:52.088201 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-5tmqq_fdab2d13-eec3-468a-b383-e6bc7e00849f/nmstate-operator/0.log" Nov 25 21:37:52 crc kubenswrapper[4983]: I1125 21:37:52.206613 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-7kqbd_ce76eb4b-37f0-4067-a4d2-34a1b8e0b6a4/nmstate-webhook/0.log" Nov 25 21:38:07 crc kubenswrapper[4983]: I1125 21:38:07.647716 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6c7b4b5f48-rfq8m_d63bc930-d0b8-4b74-924f-def9a4c05193/kube-rbac-proxy/0.log" Nov 25 21:38:07 crc kubenswrapper[4983]: I1125 21:38:07.848643 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-frr-files/0.log" Nov 25 21:38:07 crc kubenswrapper[4983]: I1125 21:38:07.889906 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-rfq8m_d63bc930-d0b8-4b74-924f-def9a4c05193/controller/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.080366 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-frr-files/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.117847 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-metrics/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.143592 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-reloader/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.170320 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-reloader/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.293453 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-frr-files/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.305178 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-reloader/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.310963 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-metrics/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.377258 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-metrics/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.529319 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-reloader/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.555428 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-frr-files/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.579405 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/controller/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.582218 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/cp-metrics/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.778290 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/frr-metrics/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.833935 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/kube-rbac-proxy/0.log" Nov 25 21:38:08 crc kubenswrapper[4983]: I1125 21:38:08.866505 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/kube-rbac-proxy-frr/0.log" Nov 25 21:38:09 crc kubenswrapper[4983]: I1125 21:38:09.014315 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/reloader/0.log" Nov 25 21:38:09 crc kubenswrapper[4983]: I1125 21:38:09.078802 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-6sz7r_4cb9ac50-997a-4361-be38-99a645916356/frr-k8s-webhook-server/0.log" Nov 25 21:38:09 crc kubenswrapper[4983]: I1125 21:38:09.210189 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6dcc87d69d-p8fwj_74baeb7c-21f0-4d1c-9a61-7694f59cc161/manager/3.log" Nov 25 21:38:09 crc kubenswrapper[4983]: I1125 21:38:09.375836 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6dcc87d69d-p8fwj_74baeb7c-21f0-4d1c-9a61-7694f59cc161/manager/2.log" Nov 25 21:38:09 crc kubenswrapper[4983]: I1125 21:38:09.450670 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7fdb8c798-tkp7s_668e90f8-b352-4ad3-8965-1394ac68bf45/webhook-server/0.log" Nov 25 21:38:09 crc kubenswrapper[4983]: I1125 21:38:09.627703 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q2pnt_b17632af-d63d-48e6-bbc3-e4056a403b94/kube-rbac-proxy/0.log" Nov 25 21:38:10 crc kubenswrapper[4983]: I1125 21:38:10.149387 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q2pnt_b17632af-d63d-48e6-bbc3-e4056a403b94/speaker/0.log" Nov 25 21:38:10 crc kubenswrapper[4983]: I1125 21:38:10.280436 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nkz74_94f67f46-ba33-4e52-a4f7-dabfa0e919c8/frr/0.log" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.237879 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wvmb9"] Nov 25 21:38:21 crc kubenswrapper[4983]: E1125 21:38:21.240056 4983 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="27eee909-a0a0-4040-b765-1c7f0db4beb1" containerName="extract-utilities" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.240138 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="27eee909-a0a0-4040-b765-1c7f0db4beb1" containerName="extract-utilities" Nov 25 21:38:21 crc kubenswrapper[4983]: E1125 21:38:21.240232 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27eee909-a0a0-4040-b765-1c7f0db4beb1" containerName="extract-content" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.240251 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="27eee909-a0a0-4040-b765-1c7f0db4beb1" containerName="extract-content" Nov 25 21:38:21 crc kubenswrapper[4983]: E1125 21:38:21.240330 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27eee909-a0a0-4040-b765-1c7f0db4beb1" containerName="registry-server" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.240349 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="27eee909-a0a0-4040-b765-1c7f0db4beb1" containerName="registry-server" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.241106 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="27eee909-a0a0-4040-b765-1c7f0db4beb1" containerName="registry-server" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.245624 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.260856 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wvmb9"] Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.274821 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/face300a-7980-4c59-a99f-d5ea5d2ad81e-utilities\") pod \"redhat-operators-wvmb9\" (UID: \"face300a-7980-4c59-a99f-d5ea5d2ad81e\") " pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.274891 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/face300a-7980-4c59-a99f-d5ea5d2ad81e-catalog-content\") pod \"redhat-operators-wvmb9\" (UID: \"face300a-7980-4c59-a99f-d5ea5d2ad81e\") " pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.275093 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62lq6\" (UniqueName: \"kubernetes.io/projected/face300a-7980-4c59-a99f-d5ea5d2ad81e-kube-api-access-62lq6\") pod \"redhat-operators-wvmb9\" (UID: \"face300a-7980-4c59-a99f-d5ea5d2ad81e\") " pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.376942 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62lq6\" (UniqueName: \"kubernetes.io/projected/face300a-7980-4c59-a99f-d5ea5d2ad81e-kube-api-access-62lq6\") pod \"redhat-operators-wvmb9\" (UID: \"face300a-7980-4c59-a99f-d5ea5d2ad81e\") " pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.377347 4983 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/face300a-7980-4c59-a99f-d5ea5d2ad81e-utilities\") pod \"redhat-operators-wvmb9\" (UID: \"face300a-7980-4c59-a99f-d5ea5d2ad81e\") " pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.377381 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/face300a-7980-4c59-a99f-d5ea5d2ad81e-catalog-content\") pod \"redhat-operators-wvmb9\" (UID: \"face300a-7980-4c59-a99f-d5ea5d2ad81e\") " pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.377849 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/face300a-7980-4c59-a99f-d5ea5d2ad81e-catalog-content\") pod \"redhat-operators-wvmb9\" (UID: \"face300a-7980-4c59-a99f-d5ea5d2ad81e\") " pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.378359 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/face300a-7980-4c59-a99f-d5ea5d2ad81e-utilities\") pod \"redhat-operators-wvmb9\" (UID: \"face300a-7980-4c59-a99f-d5ea5d2ad81e\") " pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.397012 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62lq6\" (UniqueName: \"kubernetes.io/projected/face300a-7980-4c59-a99f-d5ea5d2ad81e-kube-api-access-62lq6\") pod \"redhat-operators-wvmb9\" (UID: \"face300a-7980-4c59-a99f-d5ea5d2ad81e\") " pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:21 crc kubenswrapper[4983]: I1125 21:38:21.597540 4983 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:22 crc kubenswrapper[4983]: I1125 21:38:22.118884 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wvmb9"] Nov 25 21:38:22 crc kubenswrapper[4983]: I1125 21:38:22.877739 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvmb9" event={"ID":"face300a-7980-4c59-a99f-d5ea5d2ad81e","Type":"ContainerStarted","Data":"978d8f895e32e71cf34b26490b45dd6012f8c73044aa207e3fa851fa642b907c"} Nov 25 21:38:23 crc kubenswrapper[4983]: I1125 21:38:23.281591 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/util/0.log" Nov 25 21:38:23 crc kubenswrapper[4983]: I1125 21:38:23.441643 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/pull/0.log" Nov 25 21:38:23 crc kubenswrapper[4983]: I1125 21:38:23.456358 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/util/0.log" Nov 25 21:38:23 crc kubenswrapper[4983]: I1125 21:38:23.474404 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/pull/0.log" Nov 25 21:38:23 crc kubenswrapper[4983]: I1125 21:38:23.591742 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/util/0.log" Nov 25 21:38:23 crc kubenswrapper[4983]: I1125 21:38:23.656467 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/extract/0.log" Nov 25 21:38:23 crc kubenswrapper[4983]: I1125 21:38:23.714025 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehp7hd_c9f48e6f-bd8d-4373-a680-4bf6a3ac8728/pull/0.log" Nov 25 21:38:23 crc kubenswrapper[4983]: I1125 21:38:23.783409 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/extract-utilities/0.log" Nov 25 21:38:23 crc kubenswrapper[4983]: I1125 21:38:23.887082 4983 generic.go:334] "Generic (PLEG): container finished" podID="face300a-7980-4c59-a99f-d5ea5d2ad81e" containerID="32735cadada7fec93be015cd868582187fe02d764cb7d292fe678e49c1442161" exitCode=0 Nov 25 21:38:23 crc kubenswrapper[4983]: I1125 21:38:23.887256 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvmb9" event={"ID":"face300a-7980-4c59-a99f-d5ea5d2ad81e","Type":"ContainerDied","Data":"32735cadada7fec93be015cd868582187fe02d764cb7d292fe678e49c1442161"} Nov 25 21:38:23 crc kubenswrapper[4983]: I1125 21:38:23.889579 4983 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 21:38:23 crc kubenswrapper[4983]: I1125 21:38:23.997967 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/extract-content/0.log" Nov 25 21:38:24 crc kubenswrapper[4983]: I1125 21:38:24.007495 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/extract-content/0.log" Nov 25 21:38:24 crc kubenswrapper[4983]: I1125 21:38:24.015189 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/extract-utilities/0.log" Nov 25 21:38:24 crc kubenswrapper[4983]: I1125 21:38:24.177902 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/extract-content/0.log" Nov 25 21:38:24 crc kubenswrapper[4983]: I1125 21:38:24.178613 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/extract-utilities/0.log" Nov 25 21:38:24 crc kubenswrapper[4983]: I1125 21:38:24.584274 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7khpl_21634d0b-fbfc-409b-9ab9-9590fc78e410/registry-server/0.log" Nov 25 21:38:25 crc kubenswrapper[4983]: I1125 21:38:25.198847 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/extract-utilities/0.log" Nov 25 21:38:25 crc kubenswrapper[4983]: I1125 21:38:25.411784 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/extract-content/0.log" Nov 25 21:38:25 crc kubenswrapper[4983]: I1125 21:38:25.431076 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/extract-content/0.log" Nov 25 21:38:25 crc kubenswrapper[4983]: I1125 21:38:25.435092 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/extract-utilities/0.log" Nov 25 21:38:25 crc kubenswrapper[4983]: I1125 21:38:25.582042 4983 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/extract-content/0.log" Nov 25 21:38:25 crc kubenswrapper[4983]: I1125 21:38:25.627255 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/extract-utilities/0.log" Nov 25 21:38:25 crc kubenswrapper[4983]: I1125 21:38:25.801670 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/util/0.log" Nov 25 21:38:25 crc kubenswrapper[4983]: I1125 21:38:25.914496 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvmb9" event={"ID":"face300a-7980-4c59-a99f-d5ea5d2ad81e","Type":"ContainerStarted","Data":"ccc0ba8e947006d71e3b3bc98b0097eac9e290e0f5af61d1ea0f4b4960aa27c7"} Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.001107 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/pull/0.log" Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.043074 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/pull/0.log" Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.131636 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/util/0.log" Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.271439 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/util/0.log" Nov 
25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.367316 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/extract/0.log" Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.398384 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-k2fp5_b97fc113-da49-4a64-a324-d63d1f29f028/registry-server/0.log" Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.464018 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6c894z_bcfc0074-c596-4345-9cd1-caada40895be/pull/0.log" Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.505089 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-kh7rb_168ec053-d5d4-4ebc-956d-429c0d2ff5fb/marketplace-operator/0.log" Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.578361 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/extract-utilities/0.log" Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.752474 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/extract-utilities/0.log" Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.789021 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/extract-content/0.log" Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.798906 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/extract-content/0.log" Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 
21:38:26.921234 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/extract-content/0.log" Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.923826 4983 generic.go:334] "Generic (PLEG): container finished" podID="face300a-7980-4c59-a99f-d5ea5d2ad81e" containerID="ccc0ba8e947006d71e3b3bc98b0097eac9e290e0f5af61d1ea0f4b4960aa27c7" exitCode=0 Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.923869 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvmb9" event={"ID":"face300a-7980-4c59-a99f-d5ea5d2ad81e","Type":"ContainerDied","Data":"ccc0ba8e947006d71e3b3bc98b0097eac9e290e0f5af61d1ea0f4b4960aa27c7"} Nov 25 21:38:26 crc kubenswrapper[4983]: I1125 21:38:26.963845 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/extract-utilities/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.004896 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/extract-utilities/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.201518 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/extract-content/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.220567 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fvs44_b702438d-1a03-4bb1-9daf-3425f03a6f75/registry-server/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.221895 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/extract-utilities/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.228956 4983 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/extract-content/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.397190 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/extract-content/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.445640 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/extract-utilities/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.480979 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wvmb9_face300a-7980-4c59-a99f-d5ea5d2ad81e/extract-utilities/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.678366 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wvmb9_face300a-7980-4c59-a99f-d5ea5d2ad81e/extract-content/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.716095 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wvmb9_face300a-7980-4c59-a99f-d5ea5d2ad81e/extract-utilities/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.764379 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wvmb9_face300a-7980-4c59-a99f-d5ea5d2ad81e/extract-content/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.837967 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hnbjx_38faea82-52be-43bf-8cea-8144ef0bd8d5/registry-server/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.934706 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvmb9" 
event={"ID":"face300a-7980-4c59-a99f-d5ea5d2ad81e","Type":"ContainerStarted","Data":"7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c"} Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.943446 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wvmb9_face300a-7980-4c59-a99f-d5ea5d2ad81e/extract-utilities/0.log" Nov 25 21:38:27 crc kubenswrapper[4983]: I1125 21:38:27.972883 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wvmb9_face300a-7980-4c59-a99f-d5ea5d2ad81e/extract-content/0.log" Nov 25 21:38:31 crc kubenswrapper[4983]: I1125 21:38:31.598021 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:31 crc kubenswrapper[4983]: I1125 21:38:31.598566 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:32 crc kubenswrapper[4983]: I1125 21:38:32.672680 4983 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wvmb9" podUID="face300a-7980-4c59-a99f-d5ea5d2ad81e" containerName="registry-server" probeResult="failure" output=< Nov 25 21:38:32 crc kubenswrapper[4983]: timeout: failed to connect service ":50051" within 1s Nov 25 21:38:32 crc kubenswrapper[4983]: > Nov 25 21:38:39 crc kubenswrapper[4983]: I1125 21:38:39.927347 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:38:39 crc kubenswrapper[4983]: I1125 21:38:39.927972 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:38:41 crc kubenswrapper[4983]: I1125 21:38:41.645721 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:41 crc kubenswrapper[4983]: I1125 21:38:41.662520 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wvmb9" podStartSLOduration=17.230760836 podStartE2EDuration="20.662505787s" podCreationTimestamp="2025-11-25 21:38:21 +0000 UTC" firstStartedPulling="2025-11-25 21:38:23.889385307 +0000 UTC m=+4285.001918699" lastFinishedPulling="2025-11-25 21:38:27.321130258 +0000 UTC m=+4288.433663650" observedRunningTime="2025-11-25 21:38:28.965176104 +0000 UTC m=+4290.077709496" watchObservedRunningTime="2025-11-25 21:38:41.662505787 +0000 UTC m=+4302.775039179" Nov 25 21:38:41 crc kubenswrapper[4983]: I1125 21:38:41.692460 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:43 crc kubenswrapper[4983]: I1125 21:38:43.055081 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wvmb9"] Nov 25 21:38:43 crc kubenswrapper[4983]: I1125 21:38:43.091370 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wvmb9" podUID="face300a-7980-4c59-a99f-d5ea5d2ad81e" containerName="registry-server" containerID="cri-o://7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c" gracePeriod=2 Nov 25 21:38:43 crc kubenswrapper[4983]: I1125 21:38:43.616836 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:43 crc kubenswrapper[4983]: I1125 21:38:43.741797 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/face300a-7980-4c59-a99f-d5ea5d2ad81e-utilities\") pod \"face300a-7980-4c59-a99f-d5ea5d2ad81e\" (UID: \"face300a-7980-4c59-a99f-d5ea5d2ad81e\") " Nov 25 21:38:43 crc kubenswrapper[4983]: I1125 21:38:43.742086 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/face300a-7980-4c59-a99f-d5ea5d2ad81e-catalog-content\") pod \"face300a-7980-4c59-a99f-d5ea5d2ad81e\" (UID: \"face300a-7980-4c59-a99f-d5ea5d2ad81e\") " Nov 25 21:38:43 crc kubenswrapper[4983]: I1125 21:38:43.742220 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62lq6\" (UniqueName: \"kubernetes.io/projected/face300a-7980-4c59-a99f-d5ea5d2ad81e-kube-api-access-62lq6\") pod \"face300a-7980-4c59-a99f-d5ea5d2ad81e\" (UID: \"face300a-7980-4c59-a99f-d5ea5d2ad81e\") " Nov 25 21:38:43 crc kubenswrapper[4983]: I1125 21:38:43.742798 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/face300a-7980-4c59-a99f-d5ea5d2ad81e-utilities" (OuterVolumeSpecName: "utilities") pod "face300a-7980-4c59-a99f-d5ea5d2ad81e" (UID: "face300a-7980-4c59-a99f-d5ea5d2ad81e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:38:43 crc kubenswrapper[4983]: I1125 21:38:43.825290 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/face300a-7980-4c59-a99f-d5ea5d2ad81e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "face300a-7980-4c59-a99f-d5ea5d2ad81e" (UID: "face300a-7980-4c59-a99f-d5ea5d2ad81e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:38:43 crc kubenswrapper[4983]: I1125 21:38:43.844412 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/face300a-7980-4c59-a99f-d5ea5d2ad81e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 21:38:43 crc kubenswrapper[4983]: I1125 21:38:43.844447 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/face300a-7980-4c59-a99f-d5ea5d2ad81e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.129353 4983 generic.go:334] "Generic (PLEG): container finished" podID="face300a-7980-4c59-a99f-d5ea5d2ad81e" containerID="7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c" exitCode=0 Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.129416 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvmb9" event={"ID":"face300a-7980-4c59-a99f-d5ea5d2ad81e","Type":"ContainerDied","Data":"7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c"} Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.130733 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wvmb9" event={"ID":"face300a-7980-4c59-a99f-d5ea5d2ad81e","Type":"ContainerDied","Data":"978d8f895e32e71cf34b26490b45dd6012f8c73044aa207e3fa851fa642b907c"} Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.129452 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wvmb9" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.130799 4983 scope.go:117] "RemoveContainer" containerID="7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.162689 4983 scope.go:117] "RemoveContainer" containerID="ccc0ba8e947006d71e3b3bc98b0097eac9e290e0f5af61d1ea0f4b4960aa27c7" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.482945 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/face300a-7980-4c59-a99f-d5ea5d2ad81e-kube-api-access-62lq6" (OuterVolumeSpecName: "kube-api-access-62lq6") pod "face300a-7980-4c59-a99f-d5ea5d2ad81e" (UID: "face300a-7980-4c59-a99f-d5ea5d2ad81e"). InnerVolumeSpecName "kube-api-access-62lq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.503762 4983 scope.go:117] "RemoveContainer" containerID="32735cadada7fec93be015cd868582187fe02d764cb7d292fe678e49c1442161" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.562061 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62lq6\" (UniqueName: \"kubernetes.io/projected/face300a-7980-4c59-a99f-d5ea5d2ad81e-kube-api-access-62lq6\") on node \"crc\" DevicePath \"\"" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.612965 4983 scope.go:117] "RemoveContainer" containerID="7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c" Nov 25 21:38:44 crc kubenswrapper[4983]: E1125 21:38:44.613678 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c\": container with ID starting with 7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c not found: ID does not exist" 
containerID="7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.613730 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c"} err="failed to get container status \"7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c\": rpc error: code = NotFound desc = could not find container \"7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c\": container with ID starting with 7a5587ea8ec478fde071908963397a77d89a88c85a9e96377e6fd38e50b7223c not found: ID does not exist" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.613763 4983 scope.go:117] "RemoveContainer" containerID="ccc0ba8e947006d71e3b3bc98b0097eac9e290e0f5af61d1ea0f4b4960aa27c7" Nov 25 21:38:44 crc kubenswrapper[4983]: E1125 21:38:44.614447 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccc0ba8e947006d71e3b3bc98b0097eac9e290e0f5af61d1ea0f4b4960aa27c7\": container with ID starting with ccc0ba8e947006d71e3b3bc98b0097eac9e290e0f5af61d1ea0f4b4960aa27c7 not found: ID does not exist" containerID="ccc0ba8e947006d71e3b3bc98b0097eac9e290e0f5af61d1ea0f4b4960aa27c7" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.614623 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccc0ba8e947006d71e3b3bc98b0097eac9e290e0f5af61d1ea0f4b4960aa27c7"} err="failed to get container status \"ccc0ba8e947006d71e3b3bc98b0097eac9e290e0f5af61d1ea0f4b4960aa27c7\": rpc error: code = NotFound desc = could not find container \"ccc0ba8e947006d71e3b3bc98b0097eac9e290e0f5af61d1ea0f4b4960aa27c7\": container with ID starting with ccc0ba8e947006d71e3b3bc98b0097eac9e290e0f5af61d1ea0f4b4960aa27c7 not found: ID does not exist" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.614704 4983 scope.go:117] 
"RemoveContainer" containerID="32735cadada7fec93be015cd868582187fe02d764cb7d292fe678e49c1442161" Nov 25 21:38:44 crc kubenswrapper[4983]: E1125 21:38:44.620959 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32735cadada7fec93be015cd868582187fe02d764cb7d292fe678e49c1442161\": container with ID starting with 32735cadada7fec93be015cd868582187fe02d764cb7d292fe678e49c1442161 not found: ID does not exist" containerID="32735cadada7fec93be015cd868582187fe02d764cb7d292fe678e49c1442161" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.620998 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32735cadada7fec93be015cd868582187fe02d764cb7d292fe678e49c1442161"} err="failed to get container status \"32735cadada7fec93be015cd868582187fe02d764cb7d292fe678e49c1442161\": rpc error: code = NotFound desc = could not find container \"32735cadada7fec93be015cd868582187fe02d764cb7d292fe678e49c1442161\": container with ID starting with 32735cadada7fec93be015cd868582187fe02d764cb7d292fe678e49c1442161 not found: ID does not exist" Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.793813 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wvmb9"] Nov 25 21:38:44 crc kubenswrapper[4983]: I1125 21:38:44.803034 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wvmb9"] Nov 25 21:38:45 crc kubenswrapper[4983]: I1125 21:38:45.621661 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="face300a-7980-4c59-a99f-d5ea5d2ad81e" path="/var/lib/kubelet/pods/face300a-7980-4c59-a99f-d5ea5d2ad81e/volumes" Nov 25 21:39:09 crc kubenswrapper[4983]: I1125 21:39:09.927655 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:39:09 crc kubenswrapper[4983]: I1125 21:39:09.928277 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:39:39 crc kubenswrapper[4983]: I1125 21:39:39.927282 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 21:39:39 crc kubenswrapper[4983]: I1125 21:39:39.927894 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 21:39:39 crc kubenswrapper[4983]: I1125 21:39:39.927947 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" Nov 25 21:39:39 crc kubenswrapper[4983]: I1125 21:39:39.928680 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5e935aa4c18062ea9c0850830cdd7bce9f90f4526f93c77397efbe4e20c1833"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 21:39:39 crc kubenswrapper[4983]: I1125 21:39:39.928732 4983 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://f5e935aa4c18062ea9c0850830cdd7bce9f90f4526f93c77397efbe4e20c1833" gracePeriod=600 Nov 25 21:39:40 crc kubenswrapper[4983]: I1125 21:39:40.808490 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="f5e935aa4c18062ea9c0850830cdd7bce9f90f4526f93c77397efbe4e20c1833" exitCode=0 Nov 25 21:39:40 crc kubenswrapper[4983]: I1125 21:39:40.809264 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"f5e935aa4c18062ea9c0850830cdd7bce9f90f4526f93c77397efbe4e20c1833"} Nov 25 21:39:40 crc kubenswrapper[4983]: I1125 21:39:40.809307 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerStarted","Data":"5600709c3300143fa75ff017c9328f7e6ad63f294efe38ee5f0a21bb4615a182"} Nov 25 21:39:40 crc kubenswrapper[4983]: I1125 21:39:40.809327 4983 scope.go:117] "RemoveContainer" containerID="efc61dfe07719f0c501d380cc6fa281e53ba116cdce8c64a9fd1d598ec140a6f" Nov 25 21:40:09 crc kubenswrapper[4983]: I1125 21:40:09.108265 4983 generic.go:334] "Generic (PLEG): container finished" podID="ad096a12-9a47-4707-91be-37bcfec628b0" containerID="402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8" exitCode=0 Nov 25 21:40:09 crc kubenswrapper[4983]: I1125 21:40:09.108346 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pljc7/must-gather-zbtz6" event={"ID":"ad096a12-9a47-4707-91be-37bcfec628b0","Type":"ContainerDied","Data":"402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8"} Nov 25 21:40:09 crc 
kubenswrapper[4983]: I1125 21:40:09.109346 4983 scope.go:117] "RemoveContainer" containerID="402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8" Nov 25 21:40:09 crc kubenswrapper[4983]: I1125 21:40:09.815140 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pljc7_must-gather-zbtz6_ad096a12-9a47-4707-91be-37bcfec628b0/gather/0.log" Nov 25 21:40:20 crc kubenswrapper[4983]: I1125 21:40:20.129671 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pljc7/must-gather-zbtz6"] Nov 25 21:40:20 crc kubenswrapper[4983]: I1125 21:40:20.130750 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-pljc7/must-gather-zbtz6" podUID="ad096a12-9a47-4707-91be-37bcfec628b0" containerName="copy" containerID="cri-o://750ccdc45912b1fc9f3f6bf74448ed24dffc6e81203a80446ef70903b6b5f9bb" gracePeriod=2 Nov 25 21:40:20 crc kubenswrapper[4983]: I1125 21:40:20.148084 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pljc7/must-gather-zbtz6"] Nov 25 21:40:20 crc kubenswrapper[4983]: I1125 21:40:20.652126 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pljc7_must-gather-zbtz6_ad096a12-9a47-4707-91be-37bcfec628b0/copy/0.log" Nov 25 21:40:20 crc kubenswrapper[4983]: I1125 21:40:20.652858 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pljc7/must-gather-zbtz6" Nov 25 21:40:20 crc kubenswrapper[4983]: I1125 21:40:20.728467 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5fw6\" (UniqueName: \"kubernetes.io/projected/ad096a12-9a47-4707-91be-37bcfec628b0-kube-api-access-h5fw6\") pod \"ad096a12-9a47-4707-91be-37bcfec628b0\" (UID: \"ad096a12-9a47-4707-91be-37bcfec628b0\") " Nov 25 21:40:20 crc kubenswrapper[4983]: I1125 21:40:20.728637 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad096a12-9a47-4707-91be-37bcfec628b0-must-gather-output\") pod \"ad096a12-9a47-4707-91be-37bcfec628b0\" (UID: \"ad096a12-9a47-4707-91be-37bcfec628b0\") " Nov 25 21:40:20 crc kubenswrapper[4983]: I1125 21:40:20.736812 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad096a12-9a47-4707-91be-37bcfec628b0-kube-api-access-h5fw6" (OuterVolumeSpecName: "kube-api-access-h5fw6") pod "ad096a12-9a47-4707-91be-37bcfec628b0" (UID: "ad096a12-9a47-4707-91be-37bcfec628b0"). InnerVolumeSpecName "kube-api-access-h5fw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 21:40:20 crc kubenswrapper[4983]: I1125 21:40:20.830967 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5fw6\" (UniqueName: \"kubernetes.io/projected/ad096a12-9a47-4707-91be-37bcfec628b0-kube-api-access-h5fw6\") on node \"crc\" DevicePath \"\"" Nov 25 21:40:20 crc kubenswrapper[4983]: I1125 21:40:20.864237 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad096a12-9a47-4707-91be-37bcfec628b0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ad096a12-9a47-4707-91be-37bcfec628b0" (UID: "ad096a12-9a47-4707-91be-37bcfec628b0"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 21:40:20 crc kubenswrapper[4983]: I1125 21:40:20.933150 4983 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad096a12-9a47-4707-91be-37bcfec628b0-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 21:40:21 crc kubenswrapper[4983]: I1125 21:40:21.235873 4983 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pljc7_must-gather-zbtz6_ad096a12-9a47-4707-91be-37bcfec628b0/copy/0.log" Nov 25 21:40:21 crc kubenswrapper[4983]: I1125 21:40:21.236295 4983 generic.go:334] "Generic (PLEG): container finished" podID="ad096a12-9a47-4707-91be-37bcfec628b0" containerID="750ccdc45912b1fc9f3f6bf74448ed24dffc6e81203a80446ef70903b6b5f9bb" exitCode=143 Nov 25 21:40:21 crc kubenswrapper[4983]: I1125 21:40:21.236349 4983 scope.go:117] "RemoveContainer" containerID="750ccdc45912b1fc9f3f6bf74448ed24dffc6e81203a80446ef70903b6b5f9bb" Nov 25 21:40:21 crc kubenswrapper[4983]: I1125 21:40:21.236354 4983 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pljc7/must-gather-zbtz6" Nov 25 21:40:21 crc kubenswrapper[4983]: I1125 21:40:21.257831 4983 scope.go:117] "RemoveContainer" containerID="402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8" Nov 25 21:40:21 crc kubenswrapper[4983]: I1125 21:40:21.334684 4983 scope.go:117] "RemoveContainer" containerID="750ccdc45912b1fc9f3f6bf74448ed24dffc6e81203a80446ef70903b6b5f9bb" Nov 25 21:40:21 crc kubenswrapper[4983]: E1125 21:40:21.335062 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"750ccdc45912b1fc9f3f6bf74448ed24dffc6e81203a80446ef70903b6b5f9bb\": container with ID starting with 750ccdc45912b1fc9f3f6bf74448ed24dffc6e81203a80446ef70903b6b5f9bb not found: ID does not exist" containerID="750ccdc45912b1fc9f3f6bf74448ed24dffc6e81203a80446ef70903b6b5f9bb" Nov 25 21:40:21 crc kubenswrapper[4983]: I1125 21:40:21.335105 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"750ccdc45912b1fc9f3f6bf74448ed24dffc6e81203a80446ef70903b6b5f9bb"} err="failed to get container status \"750ccdc45912b1fc9f3f6bf74448ed24dffc6e81203a80446ef70903b6b5f9bb\": rpc error: code = NotFound desc = could not find container \"750ccdc45912b1fc9f3f6bf74448ed24dffc6e81203a80446ef70903b6b5f9bb\": container with ID starting with 750ccdc45912b1fc9f3f6bf74448ed24dffc6e81203a80446ef70903b6b5f9bb not found: ID does not exist" Nov 25 21:40:21 crc kubenswrapper[4983]: I1125 21:40:21.335131 4983 scope.go:117] "RemoveContainer" containerID="402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8" Nov 25 21:40:21 crc kubenswrapper[4983]: E1125 21:40:21.335414 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8\": container with ID starting with 
402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8 not found: ID does not exist" containerID="402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8"
Nov 25 21:40:21 crc kubenswrapper[4983]: I1125 21:40:21.335445 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8"} err="failed to get container status \"402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8\": rpc error: code = NotFound desc = could not find container \"402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8\": container with ID starting with 402d1cec812f5dfdd865eb1069c458a00a5e93ec35cd150869a9e9458f2017f8 not found: ID does not exist"
Nov 25 21:40:21 crc kubenswrapper[4983]: I1125 21:40:21.618069 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad096a12-9a47-4707-91be-37bcfec628b0" path="/var/lib/kubelet/pods/ad096a12-9a47-4707-91be-37bcfec628b0/volumes"
Nov 25 21:42:09 crc kubenswrapper[4983]: I1125 21:42:09.928132 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 21:42:09 crc kubenswrapper[4983]: I1125 21:42:09.928684 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.570086 4983 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-52pcq"]
Nov 25 21:42:12 crc kubenswrapper[4983]: E1125 21:42:12.571086 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad096a12-9a47-4707-91be-37bcfec628b0" containerName="gather"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.571114 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad096a12-9a47-4707-91be-37bcfec628b0" containerName="gather"
Nov 25 21:42:12 crc kubenswrapper[4983]: E1125 21:42:12.571143 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad096a12-9a47-4707-91be-37bcfec628b0" containerName="copy"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.571154 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad096a12-9a47-4707-91be-37bcfec628b0" containerName="copy"
Nov 25 21:42:12 crc kubenswrapper[4983]: E1125 21:42:12.571169 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="face300a-7980-4c59-a99f-d5ea5d2ad81e" containerName="extract-content"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.571180 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="face300a-7980-4c59-a99f-d5ea5d2ad81e" containerName="extract-content"
Nov 25 21:42:12 crc kubenswrapper[4983]: E1125 21:42:12.571218 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="face300a-7980-4c59-a99f-d5ea5d2ad81e" containerName="registry-server"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.571229 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="face300a-7980-4c59-a99f-d5ea5d2ad81e" containerName="registry-server"
Nov 25 21:42:12 crc kubenswrapper[4983]: E1125 21:42:12.571251 4983 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="face300a-7980-4c59-a99f-d5ea5d2ad81e" containerName="extract-utilities"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.571263 4983 state_mem.go:107] "Deleted CPUSet assignment" podUID="face300a-7980-4c59-a99f-d5ea5d2ad81e" containerName="extract-utilities"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.571592 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad096a12-9a47-4707-91be-37bcfec628b0" containerName="copy"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.571623 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad096a12-9a47-4707-91be-37bcfec628b0" containerName="gather"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.571673 4983 memory_manager.go:354] "RemoveStaleState removing state" podUID="face300a-7980-4c59-a99f-d5ea5d2ad81e" containerName="registry-server"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.573934 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.598779 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-52pcq"]
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.672862 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc557697-cd2e-48c5-aa06-19f10e04b555-utilities\") pod \"redhat-marketplace-52pcq\" (UID: \"fc557697-cd2e-48c5-aa06-19f10e04b555\") " pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.672937 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpz8k\" (UniqueName: \"kubernetes.io/projected/fc557697-cd2e-48c5-aa06-19f10e04b555-kube-api-access-tpz8k\") pod \"redhat-marketplace-52pcq\" (UID: \"fc557697-cd2e-48c5-aa06-19f10e04b555\") " pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.673056 4983 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc557697-cd2e-48c5-aa06-19f10e04b555-catalog-content\") pod \"redhat-marketplace-52pcq\" (UID: \"fc557697-cd2e-48c5-aa06-19f10e04b555\") " pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.775422 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc557697-cd2e-48c5-aa06-19f10e04b555-catalog-content\") pod \"redhat-marketplace-52pcq\" (UID: \"fc557697-cd2e-48c5-aa06-19f10e04b555\") " pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.775537 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc557697-cd2e-48c5-aa06-19f10e04b555-utilities\") pod \"redhat-marketplace-52pcq\" (UID: \"fc557697-cd2e-48c5-aa06-19f10e04b555\") " pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.775600 4983 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpz8k\" (UniqueName: \"kubernetes.io/projected/fc557697-cd2e-48c5-aa06-19f10e04b555-kube-api-access-tpz8k\") pod \"redhat-marketplace-52pcq\" (UID: \"fc557697-cd2e-48c5-aa06-19f10e04b555\") " pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.775978 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc557697-cd2e-48c5-aa06-19f10e04b555-catalog-content\") pod \"redhat-marketplace-52pcq\" (UID: \"fc557697-cd2e-48c5-aa06-19f10e04b555\") " pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.776058 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc557697-cd2e-48c5-aa06-19f10e04b555-utilities\") pod \"redhat-marketplace-52pcq\" (UID: \"fc557697-cd2e-48c5-aa06-19f10e04b555\") " pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.797101 4983 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpz8k\" (UniqueName: \"kubernetes.io/projected/fc557697-cd2e-48c5-aa06-19f10e04b555-kube-api-access-tpz8k\") pod \"redhat-marketplace-52pcq\" (UID: \"fc557697-cd2e-48c5-aa06-19f10e04b555\") " pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:12 crc kubenswrapper[4983]: I1125 21:42:12.906909 4983 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:13 crc kubenswrapper[4983]: I1125 21:42:13.365793 4983 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-52pcq"]
Nov 25 21:42:13 crc kubenswrapper[4983]: I1125 21:42:13.646415 4983 generic.go:334] "Generic (PLEG): container finished" podID="fc557697-cd2e-48c5-aa06-19f10e04b555" containerID="814bdcefb261ba13e5ff22bdde613f695715f3def975ee1cc5463ade56d33ed1" exitCode=0
Nov 25 21:42:13 crc kubenswrapper[4983]: I1125 21:42:13.646483 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52pcq" event={"ID":"fc557697-cd2e-48c5-aa06-19f10e04b555","Type":"ContainerDied","Data":"814bdcefb261ba13e5ff22bdde613f695715f3def975ee1cc5463ade56d33ed1"}
Nov 25 21:42:13 crc kubenswrapper[4983]: I1125 21:42:13.646529 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52pcq" event={"ID":"fc557697-cd2e-48c5-aa06-19f10e04b555","Type":"ContainerStarted","Data":"f1cf9bb4ae9b36379b86f3ff13430dd6598c9d823bae7485b2d70aedf5f1ee51"}
Nov 25 21:42:14 crc kubenswrapper[4983]: I1125 21:42:14.660314 4983 generic.go:334] "Generic (PLEG): container finished" podID="fc557697-cd2e-48c5-aa06-19f10e04b555" containerID="03ddc006ce74b376e009e6d48684afcc1643ab1acdc74ad96e4634e345529d3d" exitCode=0
Nov 25 21:42:14 crc kubenswrapper[4983]: I1125 21:42:14.660403 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52pcq" event={"ID":"fc557697-cd2e-48c5-aa06-19f10e04b555","Type":"ContainerDied","Data":"03ddc006ce74b376e009e6d48684afcc1643ab1acdc74ad96e4634e345529d3d"}
Nov 25 21:42:15 crc kubenswrapper[4983]: I1125 21:42:15.672366 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52pcq" event={"ID":"fc557697-cd2e-48c5-aa06-19f10e04b555","Type":"ContainerStarted","Data":"a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322"}
Nov 25 21:42:15 crc kubenswrapper[4983]: I1125 21:42:15.695874 4983 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-52pcq" podStartSLOduration=2.278967353 podStartE2EDuration="3.695853775s" podCreationTimestamp="2025-11-25 21:42:12 +0000 UTC" firstStartedPulling="2025-11-25 21:42:13.649298965 +0000 UTC m=+4514.761832397" lastFinishedPulling="2025-11-25 21:42:15.066185427 +0000 UTC m=+4516.178718819" observedRunningTime="2025-11-25 21:42:15.689072665 +0000 UTC m=+4516.801606057" watchObservedRunningTime="2025-11-25 21:42:15.695853775 +0000 UTC m=+4516.808387167"
Nov 25 21:42:22 crc kubenswrapper[4983]: I1125 21:42:22.907441 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:22 crc kubenswrapper[4983]: I1125 21:42:22.907986 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:22 crc kubenswrapper[4983]: I1125 21:42:22.963904 4983 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:23 crc kubenswrapper[4983]: I1125 21:42:23.804424 4983 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:24 crc kubenswrapper[4983]: I1125 21:42:24.947212 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-52pcq"]
Nov 25 21:42:25 crc kubenswrapper[4983]: I1125 21:42:25.768984 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-52pcq" podUID="fc557697-cd2e-48c5-aa06-19f10e04b555" containerName="registry-server" containerID="cri-o://a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322" gracePeriod=2
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.277166 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.444388 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc557697-cd2e-48c5-aa06-19f10e04b555-catalog-content\") pod \"fc557697-cd2e-48c5-aa06-19f10e04b555\" (UID: \"fc557697-cd2e-48c5-aa06-19f10e04b555\") "
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.444501 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpz8k\" (UniqueName: \"kubernetes.io/projected/fc557697-cd2e-48c5-aa06-19f10e04b555-kube-api-access-tpz8k\") pod \"fc557697-cd2e-48c5-aa06-19f10e04b555\" (UID: \"fc557697-cd2e-48c5-aa06-19f10e04b555\") "
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.444543 4983 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc557697-cd2e-48c5-aa06-19f10e04b555-utilities\") pod \"fc557697-cd2e-48c5-aa06-19f10e04b555\" (UID: \"fc557697-cd2e-48c5-aa06-19f10e04b555\") "
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.445822 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc557697-cd2e-48c5-aa06-19f10e04b555-utilities" (OuterVolumeSpecName: "utilities") pod "fc557697-cd2e-48c5-aa06-19f10e04b555" (UID: "fc557697-cd2e-48c5-aa06-19f10e04b555"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.451663 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc557697-cd2e-48c5-aa06-19f10e04b555-kube-api-access-tpz8k" (OuterVolumeSpecName: "kube-api-access-tpz8k") pod "fc557697-cd2e-48c5-aa06-19f10e04b555" (UID: "fc557697-cd2e-48c5-aa06-19f10e04b555"). InnerVolumeSpecName "kube-api-access-tpz8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.461770 4983 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc557697-cd2e-48c5-aa06-19f10e04b555-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc557697-cd2e-48c5-aa06-19f10e04b555" (UID: "fc557697-cd2e-48c5-aa06-19f10e04b555"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.547314 4983 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc557697-cd2e-48c5-aa06-19f10e04b555-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.547348 4983 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpz8k\" (UniqueName: \"kubernetes.io/projected/fc557697-cd2e-48c5-aa06-19f10e04b555-kube-api-access-tpz8k\") on node \"crc\" DevicePath \"\""
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.547360 4983 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc557697-cd2e-48c5-aa06-19f10e04b555-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.780234 4983 generic.go:334] "Generic (PLEG): container finished" podID="fc557697-cd2e-48c5-aa06-19f10e04b555" containerID="a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322" exitCode=0
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.780282 4983 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-52pcq"
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.780280 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52pcq" event={"ID":"fc557697-cd2e-48c5-aa06-19f10e04b555","Type":"ContainerDied","Data":"a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322"}
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.780405 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-52pcq" event={"ID":"fc557697-cd2e-48c5-aa06-19f10e04b555","Type":"ContainerDied","Data":"f1cf9bb4ae9b36379b86f3ff13430dd6598c9d823bae7485b2d70aedf5f1ee51"}
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.780432 4983 scope.go:117] "RemoveContainer" containerID="a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322"
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.807164 4983 scope.go:117] "RemoveContainer" containerID="03ddc006ce74b376e009e6d48684afcc1643ab1acdc74ad96e4634e345529d3d"
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.823216 4983 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-52pcq"]
Nov 25 21:42:26 crc kubenswrapper[4983]: I1125 21:42:26.832745 4983 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-52pcq"]
Nov 25 21:42:27 crc kubenswrapper[4983]: I1125 21:42:27.206535 4983 scope.go:117] "RemoveContainer" containerID="814bdcefb261ba13e5ff22bdde613f695715f3def975ee1cc5463ade56d33ed1"
Nov 25 21:42:27 crc kubenswrapper[4983]: I1125 21:42:27.241151 4983 scope.go:117] "RemoveContainer" containerID="a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322"
Nov 25 21:42:27 crc kubenswrapper[4983]: E1125 21:42:27.241805 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322\": container with ID starting with a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322 not found: ID does not exist" containerID="a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322"
Nov 25 21:42:27 crc kubenswrapper[4983]: I1125 21:42:27.241844 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322"} err="failed to get container status \"a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322\": rpc error: code = NotFound desc = could not find container \"a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322\": container with ID starting with a23e749245c799f46c9ab08e4c428ac51904daeb3d61c20ba87cbb7bb90ee322 not found: ID does not exist"
Nov 25 21:42:27 crc kubenswrapper[4983]: I1125 21:42:27.241868 4983 scope.go:117] "RemoveContainer" containerID="03ddc006ce74b376e009e6d48684afcc1643ab1acdc74ad96e4634e345529d3d"
Nov 25 21:42:27 crc kubenswrapper[4983]: E1125 21:42:27.242200 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ddc006ce74b376e009e6d48684afcc1643ab1acdc74ad96e4634e345529d3d\": container with ID starting with 03ddc006ce74b376e009e6d48684afcc1643ab1acdc74ad96e4634e345529d3d not found: ID does not exist" containerID="03ddc006ce74b376e009e6d48684afcc1643ab1acdc74ad96e4634e345529d3d"
Nov 25 21:42:27 crc kubenswrapper[4983]: I1125 21:42:27.242224 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ddc006ce74b376e009e6d48684afcc1643ab1acdc74ad96e4634e345529d3d"} err="failed to get container status \"03ddc006ce74b376e009e6d48684afcc1643ab1acdc74ad96e4634e345529d3d\": rpc error: code = NotFound desc = could not find container \"03ddc006ce74b376e009e6d48684afcc1643ab1acdc74ad96e4634e345529d3d\": container with ID starting with 03ddc006ce74b376e009e6d48684afcc1643ab1acdc74ad96e4634e345529d3d not found: ID does not exist"
Nov 25 21:42:27 crc kubenswrapper[4983]: I1125 21:42:27.242292 4983 scope.go:117] "RemoveContainer" containerID="814bdcefb261ba13e5ff22bdde613f695715f3def975ee1cc5463ade56d33ed1"
Nov 25 21:42:27 crc kubenswrapper[4983]: E1125 21:42:27.242706 4983 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"814bdcefb261ba13e5ff22bdde613f695715f3def975ee1cc5463ade56d33ed1\": container with ID starting with 814bdcefb261ba13e5ff22bdde613f695715f3def975ee1cc5463ade56d33ed1 not found: ID does not exist" containerID="814bdcefb261ba13e5ff22bdde613f695715f3def975ee1cc5463ade56d33ed1"
Nov 25 21:42:27 crc kubenswrapper[4983]: I1125 21:42:27.242754 4983 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"814bdcefb261ba13e5ff22bdde613f695715f3def975ee1cc5463ade56d33ed1"} err="failed to get container status \"814bdcefb261ba13e5ff22bdde613f695715f3def975ee1cc5463ade56d33ed1\": rpc error: code = NotFound desc = could not find container \"814bdcefb261ba13e5ff22bdde613f695715f3def975ee1cc5463ade56d33ed1\": container with ID starting with 814bdcefb261ba13e5ff22bdde613f695715f3def975ee1cc5463ade56d33ed1 not found: ID does not exist"
Nov 25 21:42:27 crc kubenswrapper[4983]: I1125 21:42:27.619054 4983 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc557697-cd2e-48c5-aa06-19f10e04b555" path="/var/lib/kubelet/pods/fc557697-cd2e-48c5-aa06-19f10e04b555/volumes"
Nov 25 21:42:39 crc kubenswrapper[4983]: I1125 21:42:39.927252 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 21:42:39 crc kubenswrapper[4983]: I1125 21:42:39.927770 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 21:43:09 crc kubenswrapper[4983]: I1125 21:43:09.927929 4983 patch_prober.go:28] interesting pod/machine-config-daemon-fqvg7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 21:43:09 crc kubenswrapper[4983]: I1125 21:43:09.928504 4983 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 21:43:09 crc kubenswrapper[4983]: I1125 21:43:09.928598 4983 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7"
Nov 25 21:43:09 crc kubenswrapper[4983]: I1125 21:43:09.929360 4983 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5600709c3300143fa75ff017c9328f7e6ad63f294efe38ee5f0a21bb4615a182"} pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 21:43:09 crc kubenswrapper[4983]: I1125 21:43:09.929438 4983 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150" containerName="machine-config-daemon" containerID="cri-o://5600709c3300143fa75ff017c9328f7e6ad63f294efe38ee5f0a21bb4615a182" gracePeriod=600
Nov 25 21:43:10 crc kubenswrapper[4983]: E1125 21:43:10.059262 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150"
Nov 25 21:43:10 crc kubenswrapper[4983]: I1125 21:43:10.320312 4983 generic.go:334] "Generic (PLEG): container finished" podID="373cf631-46b3-49f3-af97-be8271ce5150" containerID="5600709c3300143fa75ff017c9328f7e6ad63f294efe38ee5f0a21bb4615a182" exitCode=0
Nov 25 21:43:10 crc kubenswrapper[4983]: I1125 21:43:10.320366 4983 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" event={"ID":"373cf631-46b3-49f3-af97-be8271ce5150","Type":"ContainerDied","Data":"5600709c3300143fa75ff017c9328f7e6ad63f294efe38ee5f0a21bb4615a182"}
Nov 25 21:43:10 crc kubenswrapper[4983]: I1125 21:43:10.320412 4983 scope.go:117] "RemoveContainer" containerID="f5e935aa4c18062ea9c0850830cdd7bce9f90f4526f93c77397efbe4e20c1833"
Nov 25 21:43:10 crc kubenswrapper[4983]: I1125 21:43:10.321068 4983 scope.go:117] "RemoveContainer" containerID="5600709c3300143fa75ff017c9328f7e6ad63f294efe38ee5f0a21bb4615a182"
Nov 25 21:43:10 crc kubenswrapper[4983]: E1125 21:43:10.321476 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150"
Nov 25 21:43:21 crc kubenswrapper[4983]: I1125 21:43:21.606360 4983 scope.go:117] "RemoveContainer" containerID="5600709c3300143fa75ff017c9328f7e6ad63f294efe38ee5f0a21bb4615a182"
Nov 25 21:43:21 crc kubenswrapper[4983]: E1125 21:43:21.609112 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150"
Nov 25 21:43:33 crc kubenswrapper[4983]: I1125 21:43:33.606387 4983 scope.go:117] "RemoveContainer" containerID="5600709c3300143fa75ff017c9328f7e6ad63f294efe38ee5f0a21bb4615a182"
Nov 25 21:43:33 crc kubenswrapper[4983]: E1125 21:43:33.607284 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150"
Nov 25 21:43:46 crc kubenswrapper[4983]: I1125 21:43:46.604956 4983 scope.go:117] "RemoveContainer" containerID="5600709c3300143fa75ff017c9328f7e6ad63f294efe38ee5f0a21bb4615a182"
Nov 25 21:43:46 crc kubenswrapper[4983]: E1125 21:43:46.605677 4983 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fqvg7_openshift-machine-config-operator(373cf631-46b3-49f3-af97-be8271ce5150)\"" pod="openshift-machine-config-operator/machine-config-daemon-fqvg7" podUID="373cf631-46b3-49f3-af97-be8271ce5150"